
vulkan: optimize mul_mat for small values of N #10991

Merged
merged 1 commit on Dec 30, 2024

Conversation

jeffbolznv
Collaborator

This is what I have in mind to fix #10966. Currently Draft because it needs more perf testing, particularly to make sure that it doesn't regress perf when N==1.

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS), where the batch_strides are overloaded to hold the row strides. Put the loads from the B matrix in the innermost loop because they should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.

Results on RTX 4070
llama-batched-bench -m Phi-3-mini-4k-instruct-q4.gguf -ngl 99 -npp 512 -ntg 128 -npl 1,2,4,8,16 -pps

before:
|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    0.186 |  2752.13 |    1.387 |    92.31 |    1.573 |   406.94 |
|   512 |    128 |    2 |    768 |    0.139 |  3682.69 |    5.796 |    44.17 |    5.935 |   129.40 |
|   512 |    128 |    4 |   1024 |    0.147 |  3476.28 |    5.901 |    86.77 |    6.048 |   169.31 |
|   512 |    128 |    8 |   1536 |    0.142 |  3617.89 |    6.309 |   162.30 |    6.451 |   238.10 |
|   512 |    128 |   16 |   2560 |    0.142 |  3608.86 |    7.470 |   274.17 |    7.612 |   336.32 |

after:
|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    0.211 |  2431.24 |    1.411 |    90.68 |    1.622 |   394.55 |
|   512 |    128 |    2 |    768 |    0.139 |  3686.18 |    1.695 |   151.04 |    1.834 |   418.81 |
|   512 |    128 |    4 |   1024 |    0.140 |  3658.53 |    1.950 |   262.52 |    2.090 |   489.90 |
|   512 |    128 |    8 |   1536 |    0.148 |  3469.54 |    6.253 |   163.76 |    6.401 |   239.98 |
|   512 |    128 |   16 |   2560 |    0.149 |  3433.38 |    7.433 |   275.54 |    7.582 |   337.65 |

I'll put directed perf tests in a separate comment.

@jeffbolznv jeffbolznv requested a review from 0cc4m December 26, 2024 22:30
@github-actions github-actions bot added labels: testing (Everything test related), Vulkan (Issues specific to the Vulkan backend), ggml (changes relating to the ggml tensor library for machine learning) — Dec 26, 2024
@jeffbolznv
Collaborator Author

Results from test-backend-ops perf -o MUL_MAT

before (with coopmat2 enabled):
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2556 runs -   492.03 us/run - 117.44 MFLOP/run - 238.69 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4260 runs -   251.12 us/run - 117.44 MFLOP/run - 467.67 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    45.22 us/run - 117.44 MFLOP/run -   2.60 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    60.39 us/run - 117.44 MFLOP/run -   1.94 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    79.27 us/run - 117.44 MFLOP/run -   1.48 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -    99.25 us/run - 117.44 MFLOP/run -   1.18 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   134.78 us/run - 117.44 MFLOP/run - 871.34 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  18744 runs -    54.35 us/run - 117.44 MFLOP/run -   2.16 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.53 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  20448 runs -    50.65 us/run - 117.44 MFLOP/run -   2.32 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    74.10 us/run - 117.44 MFLOP/run -   1.58 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.20 us/run - 117.44 MFLOP/run -   1.10 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                11928 runs -    85.06 us/run - 117.44 MFLOP/run -   1.38 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   505.11 us/run - 234.88 MFLOP/run - 465.01 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2556 runs -   447.88 us/run - 234.88 MFLOP/run - 524.43 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   435.22 us/run - 234.88 MFLOP/run - 539.68 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   446.16 us/run - 234.88 MFLOP/run - 526.44 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   542.16 us/run - 234.88 MFLOP/run - 433.23 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   510.54 us/run - 234.88 MFLOP/run - 460.07 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   489.83 us/run - 234.88 MFLOP/run - 479.51 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   630.16 us/run - 234.88 MFLOP/run - 372.73 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   567.43 us/run - 234.88 MFLOP/run - 413.94 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   505.23 us/run - 234.88 MFLOP/run - 464.89 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   704.45 us/run - 234.88 MFLOP/run - 333.43 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   485.28 us/run - 234.88 MFLOP/run - 484.01 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1704 runs -   588.47 us/run - 234.88 MFLOP/run - 399.14 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1988 runs -   507.20 us/run - 352.32 MFLOP/run - 694.64 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2272 runs -   446.84 us/run - 352.32 MFLOP/run - 788.48 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   436.91 us/run - 352.32 MFLOP/run - 806.40 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   466.01 us/run - 352.32 MFLOP/run - 756.04 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   542.84 us/run - 352.32 MFLOP/run - 649.03 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   508.58 us/run - 352.32 MFLOP/run - 692.75 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   489.72 us/run - 352.32 MFLOP/run - 719.44 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   623.28 us/run - 352.32 MFLOP/run - 565.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   567.26 us/run - 352.32 MFLOP/run - 621.09 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   658.65 us/run - 352.32 MFLOP/run - 534.92 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   567.61 us/run - 352.32 MFLOP/run - 620.71 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   486.81 us/run - 352.32 MFLOP/run - 723.74 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1704 runs -   595.93 us/run - 352.32 MFLOP/run - 591.21 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   509.75 us/run - 469.76 MFLOP/run - 921.54 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2343 runs -   449.00 us/run - 469.76 MFLOP/run -   1.05 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   437.28 us/run - 469.76 MFLOP/run -   1.07 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   468.96 us/run - 469.76 MFLOP/run -   1.00 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   544.17 us/run - 469.76 MFLOP/run - 863.26 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1491 runs -   686.28 us/run - 469.76 MFLOP/run - 684.51 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   489.94 us/run - 469.76 MFLOP/run - 958.81 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   470.73 us/run - 469.76 MFLOP/run - 997.94 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   568.54 us/run - 469.76 MFLOP/run - 826.26 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   508.29 us/run - 469.76 MFLOP/run - 924.20 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   568.17 us/run - 469.76 MFLOP/run - 826.80 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   487.63 us/run - 469.76 MFLOP/run - 963.35 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1917 runs -   528.13 us/run - 469.76 MFLOP/run - 889.49 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2052 runs -   510.96 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2223 runs -   449.89 us/run - 587.20 MFLOP/run -   1.31 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2394 runs -   438.36 us/run - 587.20 MFLOP/run -   1.34 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2223 runs -   471.64 us/run - 587.20 MFLOP/run -   1.25 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   546.42 us/run - 587.20 MFLOP/run -   1.07 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   510.66 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   494.39 us/run - 587.20 MFLOP/run -   1.19 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2223 runs -   470.94 us/run - 587.20 MFLOP/run -   1.25 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   570.14 us/run - 587.20 MFLOP/run -   1.03 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1710 runs -   622.60 us/run - 587.20 MFLOP/run - 943.14 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   674.18 us/run - 587.20 MFLOP/run - 870.99 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   489.73 us/run - 587.20 MFLOP/run -   1.20 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 2052 runs -   515.57 us/run - 587.20 MFLOP/run -   1.14 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2033 runs -   515.37 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2247 runs -   454.28 us/run - 939.52 MFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2354 runs -   440.24 us/run - 939.52 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   470.06 us/run - 939.52 MFLOP/run -   2.00 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1926 runs -   547.73 us/run - 939.52 MFLOP/run -   1.72 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1498 runs -   668.86 us/run - 939.52 MFLOP/run -   1.40 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   493.21 us/run - 939.52 MFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1712 runs -   607.48 us/run - 939.52 MFLOP/run -   1.55 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1819 runs -   571.28 us/run - 939.52 MFLOP/run -   1.64 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1498 runs -   672.34 us/run - 939.52 MFLOP/run -   1.40 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1391 runs -   745.89 us/run - 939.52 MFLOP/run -   1.26 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   492.09 us/run - 939.52 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1926 runs -   539.10 us/run - 939.52 MFLOP/run -   1.74 TFLOPS

after:
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2556 runs -   492.15 us/run - 117.44 MFLOP/run - 238.63 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4260 runs -   251.55 us/run - 117.44 MFLOP/run - 466.87 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    45.77 us/run - 117.44 MFLOP/run -   2.57 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  16188 runs -    63.76 us/run - 117.44 MFLOP/run -   1.84 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    80.85 us/run - 117.44 MFLOP/run -   1.45 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   117.77 us/run - 117.44 MFLOP/run - 997.23 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   134.44 us/run - 117.44 MFLOP/run - 873.56 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  18744 runs -    53.62 us/run - 117.44 MFLOP/run -   2.19 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.70 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  20448 runs -    50.27 us/run - 117.44 MFLOP/run -   2.34 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    95.54 us/run - 117.44 MFLOP/run -   1.23 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.41 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                11928 runs -    84.43 us/run - 117.44 MFLOP/run -   1.39 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   494.87 us/run - 234.88 MFLOP/run - 474.63 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4260 runs -   253.07 us/run - 234.88 MFLOP/run - 928.13 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  16188 runs -    63.20 us/run - 234.88 MFLOP/run -   3.72 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    75.14 us/run - 234.88 MFLOP/run -   3.13 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    93.91 us/run - 234.88 MFLOP/run -   2.50 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   110.77 us/run - 234.88 MFLOP/run -   2.12 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   137.74 us/run - 234.88 MFLOP/run -   1.71 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    91.60 us/run - 234.88 MFLOP/run -   2.56 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   123.03 us/run - 234.88 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    88.40 us/run - 234.88 MFLOP/run -   2.66 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    87.25 us/run - 234.88 MFLOP/run -   2.69 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   109.90 us/run - 234.88 MFLOP/run -   2.14 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 9798 runs -   103.26 us/run - 234.88 MFLOP/run -   2.27 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2272 runs -   499.86 us/run - 352.32 MFLOP/run - 704.84 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    3976 runs -   258.54 us/run - 352.32 MFLOP/run -   1.36 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13064 runs -    78.11 us/run - 352.32 MFLOP/run -   4.51 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   119.23 us/run - 352.32 MFLOP/run -   2.95 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9088 runs -   111.03 us/run - 352.32 MFLOP/run -   3.17 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6248 runs -   164.92 us/run - 352.32 MFLOP/run -   2.14 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7100 runs -   141.27 us/run - 352.32 MFLOP/run -   2.49 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8236 runs -   123.62 us/run - 352.32 MFLOP/run -   2.85 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.13 us/run - 352.32 MFLOP/run -   2.38 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    91.87 us/run - 352.32 MFLOP/run -   3.84 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   108.56 us/run - 352.32 MFLOP/run -   3.25 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   121.65 us/run - 352.32 MFLOP/run -   2.90 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 7668 runs -   132.48 us/run - 352.32 MFLOP/run -   2.66 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2130 runs -   501.44 us/run - 469.76 MFLOP/run - 936.83 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    4047 runs -   259.78 us/run - 469.76 MFLOP/run -   1.81 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -    98.85 us/run - 469.76 MFLOP/run -   4.75 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8094 runs -   125.80 us/run - 469.76 MFLOP/run -   3.73 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7881 runs -   128.16 us/run - 469.76 MFLOP/run -   3.67 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   157.07 us/run - 469.76 MFLOP/run -   2.99 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   172.95 us/run - 469.76 MFLOP/run -   2.72 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8307 runs -   121.43 us/run - 469.76 MFLOP/run -   3.87 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   160.11 us/run - 469.76 MFLOP/run -   2.93 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9159 runs -   111.70 us/run - 469.76 MFLOP/run -   4.21 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   130.82 us/run - 469.76 MFLOP/run -   3.59 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7242 runs -   141.93 us/run - 469.76 MFLOP/run -   3.31 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 6390 runs -   158.66 us/run - 469.76 MFLOP/run -   2.96 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2052 runs -   510.92 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2223 runs -   457.94 us/run - 587.20 MFLOP/run -   1.28 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2394 runs -   438.06 us/run - 587.20 MFLOP/run -   1.34 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   554.66 us/run - 587.20 MFLOP/run -   1.06 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   544.65 us/run - 587.20 MFLOP/run -   1.08 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   511.42 us/run - 587.20 MFLOP/run -   1.15 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   492.77 us/run - 587.20 MFLOP/run -   1.19 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2223 runs -   472.01 us/run - 587.20 MFLOP/run -   1.24 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   572.29 us/run - 587.20 MFLOP/run -   1.03 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   671.28 us/run - 587.20 MFLOP/run - 874.75 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   570.02 us/run - 587.20 MFLOP/run -   1.03 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2052 runs -   489.23 us/run - 587.20 MFLOP/run -   1.20 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1881 runs -   556.97 us/run - 587.20 MFLOP/run -   1.05 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2033 runs -   515.25 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2247 runs -   463.40 us/run - 939.52 MFLOP/run -   2.03 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2354 runs -   440.64 us/run - 939.52 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   480.48 us/run - 939.52 MFLOP/run -   1.96 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1926 runs -   547.03 us/run - 939.52 MFLOP/run -   1.72 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   514.89 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   495.21 us/run - 939.52 MFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   475.24 us/run - 939.52 MFLOP/run -   1.98 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1819 runs -   572.38 us/run - 939.52 MFLOP/run -   1.64 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   514.41 us/run - 939.52 MFLOP/run -   1.83 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1391 runs -   751.80 us/run - 939.52 MFLOP/run -   1.25 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   493.02 us/run - 939.52 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1712 runs -   599.61 us/run - 939.52 MFLOP/run -   1.57 TFLOPS

The "before" results with coopmat1 or no coopmat were worse (I can share them if somebody is interested, but it's probably more useful to benchmark another GPU instead).

Still thinking about where to put the cutoff for switching from mul_mat_vec to mul_mat. Seems like N==8 would still be better using mul_mat_vec, and supporting it doesn't cost anything except a little bit of compile time. Let's collect data on some other systems before finalizing anything.

@jeffbolznv
Collaborator Author

CC @netrunnereve, can you please help with some perf tests?

@jeffbolznv
Collaborator Author

Results with mul_mat_vec_max_cols == 8:

|    PP |     TG |    B |   N_KV |   T_PP s | S_PP t/s |   T_TG s | S_TG t/s |      T s |    S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
|   512 |    128 |    1 |    640 |    0.184 |  2777.75 |    1.406 |    91.03 |    1.590 |   402.41 |
|   512 |    128 |    2 |    768 |    0.144 |  3554.54 |    1.691 |   151.36 |    1.835 |   418.43 |
|   512 |    128 |    4 |   1024 |    0.140 |  3655.89 |    1.978 |   258.90 |    2.118 |   483.56 |
|   512 |    128 |    8 |   1536 |    0.147 |  3484.46 |    3.163 |   323.70 |    3.310 |   464.00 |
|   512 |    128 |   16 |   2560 |    0.149 |  3427.04 |    7.199 |   284.49 |    7.348 |   348.38 |

  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8208 runs -   122.57 us/run - 587.20 MFLOP/run -   4.79 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5130 runs -   198.44 us/run - 587.20 MFLOP/run -   2.96 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6498 runs -   154.93 us/run - 587.20 MFLOP/run -   3.79 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5301 runs -   192.70 us/run - 587.20 MFLOP/run -   3.05 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   208.25 us/run - 587.20 MFLOP/run -   2.82 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7011 runs -   144.74 us/run - 587.20 MFLOP/run -   4.06 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5130 runs -   201.45 us/run - 587.20 MFLOP/run -   2.91 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   164.43 us/run - 587.20 MFLOP/run -   3.57 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5472 runs -   183.34 us/run - 587.20 MFLOP/run -   3.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6327 runs -   159.89 us/run - 587.20 MFLOP/run -   3.67 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 6327 runs -   160.11 us/run - 587.20 MFLOP/run -   3.67 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    2033 runs -   508.04 us/run - 939.52 MFLOP/run -   1.85 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    3531 runs -   284.00 us/run - 939.52 MFLOP/run -   3.31 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5350 runs -   189.77 us/run - 939.52 MFLOP/run -   4.95 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   302.83 us/run - 939.52 MFLOP/run -   3.10 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   221.13 us/run - 939.52 MFLOP/run -   4.25 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3424 runs -   292.64 us/run - 939.52 MFLOP/run -   3.21 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3210 runs -   313.45 us/run - 939.52 MFLOP/run -   3.00 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   219.81 us/run - 939.52 MFLOP/run -   4.27 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   308.18 us/run - 939.52 MFLOP/run -   3.05 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5029 runs -   202.19 us/run - 939.52 MFLOP/run -   4.65 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4708 runs -   216.55 us/run - 939.52 MFLOP/run -   4.34 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   218.30 us/run - 939.52 MFLOP/run -   4.30 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 4173 runs -   242.99 us/run - 939.52 MFLOP/run -   3.87 TFLOPS

@netrunnereve

> CC @netrunnereve, can you please help with some perf tests?

Here are the numbers on my RX 470; it's much faster at small n compared to master. My card prefers a max cols of 8, or maybe something even larger.

Master:

  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1216.30 us/run - 117.44 MFLOP/run -  96.56 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1704 runs -   648.42 us/run - 117.44 MFLOP/run - 181.12 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   219.79 us/run - 117.44 MFLOP/run - 534.32 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   263.19 us/run - 117.44 MFLOP/run - 446.21 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   315.26 us/run - 117.44 MFLOP/run - 372.52 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   343.19 us/run - 117.44 MFLOP/run - 342.21 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   341.83 us/run - 117.44 MFLOP/run - 343.57 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   240.04 us/run - 117.44 MFLOP/run - 489.24 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   444.23 us/run - 117.44 MFLOP/run - 264.37 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   236.60 us/run - 117.44 MFLOP/run - 496.38 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   311.48 us/run - 117.44 MFLOP/run - 377.04 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   365.34 us/run - 117.44 MFLOP/run - 321.46 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 5112 runs -   225.15 us/run - 117.44 MFLOP/run - 521.60 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     426 runs - 34594.86 us/run - 234.88 MFLOP/run -   6.79 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     426 runs -  6174.27 us/run - 234.88 MFLOP/run -  38.04 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3062.32 us/run - 234.88 MFLOP/run -  76.70 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  2869.80 us/run - 234.88 MFLOP/run -  81.85 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3619.29 us/run - 234.88 MFLOP/run -  64.90 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  2944.36 us/run - 234.88 MFLOP/run -  79.77 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3155.41 us/run - 234.88 MFLOP/run -  74.44 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3600.20 us/run - 234.88 MFLOP/run -  65.24 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  5398.02 us/run - 234.88 MFLOP/run -  43.51 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3558.89 us/run - 234.88 MFLOP/run -  66.00 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3923.29 us/run - 234.88 MFLOP/run -  59.87 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3643.44 us/run - 234.88 MFLOP/run -  64.47 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  426 runs -  3137.46 us/run - 234.88 MFLOP/run -  74.86 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     284 runs - 35506.70 us/run - 352.32 MFLOP/run -   9.92 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     284 runs -  6184.04 us/run - 352.32 MFLOP/run -  56.97 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    568 runs -  3336.13 us/run - 352.32 MFLOP/run - 105.61 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    568 runs -  3206.07 us/run - 352.32 MFLOP/run - 109.89 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4161.13 us/run - 352.32 MFLOP/run -  84.67 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  3721.77 us/run - 352.32 MFLOP/run -  94.67 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    568 runs -  3492.59 us/run - 352.32 MFLOP/run - 100.88 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  3908.29 us/run - 352.32 MFLOP/run -  90.15 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  5905.91 us/run - 352.32 MFLOP/run -  59.66 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4338.64 us/run - 352.32 MFLOP/run -  81.21 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4336.33 us/run - 352.32 MFLOP/run -  81.25 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    284 runs -  4010.72 us/run - 352.32 MFLOP/run -  87.84 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  568 runs -  3470.56 us/run - 352.32 MFLOP/run - 101.52 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     213 runs - 36834.47 us/run - 469.76 MFLOP/run -  12.75 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     213 runs -  6144.24 us/run - 469.76 MFLOP/run -  76.46 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3314.64 us/run - 469.76 MFLOP/run - 141.72 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3188.94 us/run - 469.76 MFLOP/run - 147.31 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4113.36 us/run - 469.76 MFLOP/run - 114.20 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3682.00 us/run - 469.76 MFLOP/run - 127.58 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3454.99 us/run - 469.76 MFLOP/run - 135.97 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  3915.42 us/run - 469.76 MFLOP/run - 119.98 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    213 runs -  5907.96 us/run - 469.76 MFLOP/run -  79.51 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4337.69 us/run - 469.76 MFLOP/run - 108.30 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4282.27 us/run - 469.76 MFLOP/run - 109.70 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    426 runs -  4013.73 us/run - 469.76 MFLOP/run - 117.04 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  426 runs -  3428.75 us/run - 469.76 MFLOP/run - 137.01 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs - 63726.35 us/run - 587.20 MFLOP/run -   9.21 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs -  7152.25 us/run - 587.20 MFLOP/run -  82.10 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4195.32 us/run - 587.20 MFLOP/run - 139.97 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3472.49 us/run - 587.20 MFLOP/run - 169.10 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5101.41 us/run - 587.20 MFLOP/run - 115.11 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5789.15 us/run - 587.20 MFLOP/run - 101.43 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3670.29 us/run - 587.20 MFLOP/run - 159.99 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4199.34 us/run - 587.20 MFLOP/run - 139.83 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    171 runs -  6215.60 us/run - 587.20 MFLOP/run -  94.47 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4672.93 us/run - 587.20 MFLOP/run - 125.66 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5186.44 us/run - 587.20 MFLOP/run - 113.22 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4256.72 us/run - 587.20 MFLOP/run - 137.95 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  342 runs -  4293.20 us/run - 587.20 MFLOP/run - 136.78 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     107 runs - 63861.16 us/run - 939.52 MFLOP/run -  14.71 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     214 runs -  7238.21 us/run - 939.52 MFLOP/run - 129.80 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4469.99 us/run - 939.52 MFLOP/run - 210.18 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3718.92 us/run - 939.52 MFLOP/run - 252.63 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5386.27 us/run - 939.52 MFLOP/run - 174.43 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6098.87 us/run - 939.52 MFLOP/run - 154.05 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3819.89 us/run - 939.52 MFLOP/run - 245.96 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4489.57 us/run - 939.52 MFLOP/run - 209.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6502.73 us/run - 939.52 MFLOP/run - 144.48 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  4957.55 us/run - 939.52 MFLOP/run - 189.51 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5439.68 us/run - 939.52 MFLOP/run - 172.72 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4535.69 us/run - 939.52 MFLOP/run - 207.14 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  321 runs -  4558.73 us/run - 939.52 MFLOP/run - 206.09 GFLOPS

PR:

  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1217.83 us/run - 117.44 MFLOP/run -  96.43 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1704 runs -   648.67 us/run - 117.44 MFLOP/run - 181.05 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   220.52 us/run - 117.44 MFLOP/run - 532.56 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   260.91 us/run - 117.44 MFLOP/run - 450.11 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   317.34 us/run - 117.44 MFLOP/run - 370.07 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   342.40 us/run - 117.44 MFLOP/run - 342.99 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   341.18 us/run - 117.44 MFLOP/run - 344.22 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   239.85 us/run - 117.44 MFLOP/run - 489.63 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   446.84 us/run - 117.44 MFLOP/run - 262.83 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   234.25 us/run - 117.44 MFLOP/run - 501.35 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   313.08 us/run - 117.44 MFLOP/run - 375.12 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   364.63 us/run - 117.44 MFLOP/run - 322.08 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 5112 runs -   225.88 us/run - 117.44 MFLOP/run - 519.93 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1229.47 us/run - 234.88 MFLOP/run - 191.04 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1704 runs -   719.99 us/run - 234.88 MFLOP/run - 326.23 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3834 runs -   286.27 us/run - 234.88 MFLOP/run - 820.49 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   367.11 us/run - 234.88 MFLOP/run - 639.81 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   391.70 us/run - 234.88 MFLOP/run - 599.64 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   447.79 us/run - 234.88 MFLOP/run - 524.54 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   447.22 us/run - 234.88 MFLOP/run - 525.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   335.05 us/run - 234.88 MFLOP/run - 701.02 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   542.40 us/run - 234.88 MFLOP/run - 433.04 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   332.25 us/run - 234.88 MFLOP/run - 706.95 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   411.88 us/run - 234.88 MFLOP/run - 570.26 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   427.64 us/run - 234.88 MFLOP/run - 549.25 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 3408 runs -   295.02 us/run - 234.88 MFLOP/run - 796.15 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1215.49 us/run - 352.32 MFLOP/run - 289.86 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1420 runs -   817.28 us/run - 352.32 MFLOP/run - 431.09 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2840 runs -   378.74 us/run - 352.32 MFLOP/run - 930.24 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   473.56 us/run - 352.32 MFLOP/run - 743.99 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   461.17 us/run - 352.32 MFLOP/run - 763.97 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   554.53 us/run - 352.32 MFLOP/run - 635.36 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   590.02 us/run - 352.32 MFLOP/run - 597.14 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   449.88 us/run - 352.32 MFLOP/run - 783.15 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   631.23 us/run - 352.32 MFLOP/run - 558.15 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   445.85 us/run - 352.32 MFLOP/run - 790.22 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   500.70 us/run - 352.32 MFLOP/run - 703.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   529.47 us/run - 352.32 MFLOP/run - 665.43 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 2840 runs -   388.79 us/run - 352.32 MFLOP/run - 906.19 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     852 runs -  1235.53 us/run - 469.76 MFLOP/run - 380.21 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1065 runs -   939.33 us/run - 469.76 MFLOP/run - 500.10 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   459.59 us/run - 469.76 MFLOP/run -   1.02 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   567.11 us/run - 469.76 MFLOP/run - 828.35 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   514.35 us/run - 469.76 MFLOP/run - 913.30 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   664.07 us/run - 469.76 MFLOP/run - 707.40 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1278 runs -   839.34 us/run - 469.76 MFLOP/run - 559.68 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   560.88 us/run - 469.76 MFLOP/run - 837.54 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1278 runs -   882.50 us/run - 469.76 MFLOP/run - 532.31 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   568.51 us/run - 469.76 MFLOP/run - 826.31 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   657.32 us/run - 469.76 MFLOP/run - 714.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   619.77 us/run - 469.76 MFLOP/run - 757.96 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 2343 runs -   470.00 us/run - 469.76 MFLOP/run - 999.50 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs - 63729.99 us/run - 587.20 MFLOP/run -   9.21 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     171 runs -  7168.01 us/run - 587.20 MFLOP/run -  81.92 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4195.74 us/run - 587.20 MFLOP/run - 139.95 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3476.33 us/run - 587.20 MFLOP/run - 168.91 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5097.87 us/run - 587.20 MFLOP/run - 115.19 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5788.51 us/run - 587.20 MFLOP/run - 101.44 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  3678.56 us/run - 587.20 MFLOP/run - 159.63 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4203.57 us/run - 587.20 MFLOP/run - 139.69 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    171 runs -  6215.48 us/run - 587.20 MFLOP/run -  94.47 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4675.67 us/run - 587.20 MFLOP/run - 125.59 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  5188.15 us/run - 587.20 MFLOP/run - 113.18 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    342 runs -  4257.13 us/run - 587.20 MFLOP/run - 137.93 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  342 runs -  4294.46 us/run - 587.20 MFLOP/run - 136.73 GFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     107 runs - 63876.04 us/run - 939.52 MFLOP/run -  14.71 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     214 runs -  7251.70 us/run - 939.52 MFLOP/run - 129.56 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4471.63 us/run - 939.52 MFLOP/run - 210.11 GFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3718.53 us/run - 939.52 MFLOP/run - 252.66 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5383.71 us/run - 939.52 MFLOP/run - 174.51 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6097.81 us/run - 939.52 MFLOP/run - 154.08 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3819.02 us/run - 939.52 MFLOP/run - 246.01 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4489.68 us/run - 939.52 MFLOP/run - 209.26 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  6507.34 us/run - 939.52 MFLOP/run - 144.38 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  4957.38 us/run - 939.52 MFLOP/run - 189.52 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    214 runs -  5441.24 us/run - 939.52 MFLOP/run - 172.67 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  4533.26 us/run - 939.52 MFLOP/run - 207.25 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  321 runs -  4559.45 us/run - 939.52 MFLOP/run - 206.06 GFLOPS

max cols of 8:

  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     855 runs -  1311.54 us/run - 587.20 MFLOP/run - 447.72 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    1026 runs -  1087.28 us/run - 587.20 MFLOP/run - 540.07 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1881 runs -   552.73 us/run - 587.20 MFLOP/run -   1.06 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   665.14 us/run - 587.20 MFLOP/run - 882.83 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1710 runs -   601.74 us/run - 587.20 MFLOP/run - 975.85 GFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   765.90 us/run - 587.20 MFLOP/run - 766.68 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1026 runs -  1153.59 us/run - 587.20 MFLOP/run - 509.02 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   720.47 us/run - 587.20 MFLOP/run - 815.03 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1026 runs -  1006.35 us/run - 587.20 MFLOP/run - 583.50 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   711.60 us/run - 587.20 MFLOP/run - 825.18 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   764.91 us/run - 587.20 MFLOP/run - 767.68 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   765.83 us/run - 587.20 MFLOP/run - 766.76 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1881 runs -   573.36 us/run - 587.20 MFLOP/run -   1.02 TFLOPS
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     642 runs -  1672.57 us/run - 939.52 MFLOP/run - 561.73 GFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                     749 runs -  1454.35 us/run - 939.52 MFLOP/run - 646.01 GFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1284 runs -   834.69 us/run - 939.52 MFLOP/run -   1.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1180.97 us/run - 939.52 MFLOP/run - 795.55 GFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1177 runs -   862.64 us/run - 939.52 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    749 runs -  1387.07 us/run - 939.52 MFLOP/run - 677.35 GFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    749 runs -  1511.23 us/run - 939.52 MFLOP/run - 621.69 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1060.27 us/run - 939.52 MFLOP/run - 886.12 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1257.63 us/run - 939.52 MFLOP/run - 747.06 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1067.26 us/run - 939.52 MFLOP/run - 880.32 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1091.97 us/run - 939.52 MFLOP/run - 860.39 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1069.41 us/run - 939.52 MFLOP/run - 878.54 GFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 1177 runs -   866.70 us/run - 939.52 MFLOP/run -   1.08 TFLOPS

@Mushoz

Mushoz commented Dec 28, 2024

Giving my results with a 7900XTX running radv:

This PR:

main: n_kv_max = 4096, n_batch = 2048, n_ubatch = 512, flash_attn = 0, is_pp_shared = 1, n_gpu_layers = 99, n_threads = 12, n_threads_batch = 12

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.590 322.06 3.899 32.83 5.489 116.60
512 128 2 768 1.567 326.75 5.118 50.02 6.684 114.89
512 128 4 1024 1.578 324.52 7.198 71.13 8.776 116.68
512 128 8 1536 1.579 324.20 37.659 27.19 39.238 39.15
512 128 16 2560 1.584 323.15 28.294 72.38 29.879 85.68

Master:

main: n_kv_max = 4096, n_batch = 2048, n_ubatch = 512, flash_attn = 0, is_pp_shared = 1, n_gpu_layers = 99, n_threads = 12, n_threads_batch = 12

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.578 324.39 3.838 33.35 5.416 118.17
512 128 2 768 1.555 329.33 31.047 8.25 32.602 23.56
512 128 4 1024 1.570 326.11 33.209 15.42 34.779 29.44
512 128 8 1536 1.571 325.94 37.241 27.50 38.812 39.58
512 128 16 2560 1.575 325.05 28.106 72.87 29.681 86.25

Conclusion:

  1. Very minor regression in the N=1 case, but given the speedup at other sizes it's probably worth it. Unless we can keep the N=1 case the same as it is right now, perhaps?
  2. Absolutely massive boost at N=2 and N=4. I am actually seeing very good speedups at those batch sizes instead of the massive performance drop-off before.
  3. N=8 and N=16 seem unchanged. Is there any chance we can use the same logic for these batch sizes? Given that N=4 is faster than N=8, it probably makes sense to use this logic at larger batch sizes as well, at least for the 7900 XTX.

Let me know if you want any additional tests at different batch sizes. Thanks for making this PR!

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.
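The loop structure described above can be sketched in scalar C++ (a simplified model of the GLSL shader's logic, not the actual shader code; `dot_row` and its signature are illustrative). The key point is that one load from A is reused across all `NUM_COLS` accumulators, with the B loads in the innermost loop:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Scalar model of the shader's inner loop: for each row of A, accumulate
// NUM_COLS dot products at once. The B loads sit in the innermost loop so
// that consecutive columns of B are touched close together and cache well.
template <int NUM_COLS>
std::array<float, NUM_COLS> dot_row(const std::vector<float>& a_row,
                                    const std::vector<std::vector<float>>& b_cols) {
    std::array<float, NUM_COLS> acc{};
    for (size_t k = 0; k < a_row.size(); ++k) {
        const float a = a_row[k];            // one load from A per k
        for (int c = 0; c < NUM_COLS; ++c) { // B loads in the innermost loop
            acc[c] += a * b_cols[c][k];
        }
    }
    return acc;
}
```

In the shader, `NUM_COLS` is a specialization constant, so each supported N gets its own fully unrolled pipeline variant at pipeline-creation time.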
@jeffbolznv jeffbolznv changed the title draft: vulkan: optimize mul_mat for small values of N vulkan: optimize mul_mat for small values of N Dec 28, 2024
@jeffbolznv
Collaborator Author

I didn't see a perf regression for N==1. I've updated the limit to 8, and removed "draft".

@jeffbolznv
Collaborator Author

Thanks @Mushoz. I've updated the limit to 8. Feel free to try 16, but I suspect the mat-mat mul path would work better for 16, at least if we tuned the matrix sizes (the current set of three sizes may be limiting...).

@Mushoz

Mushoz commented Dec 28, 2024

Token generation is looking good at batch size 8 as well now!

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.593 321.35 3.899 32.83 5.493 116.52
512 128 2 768 1.569 326.24 5.125 49.96 6.694 114.73
512 128 4 1024 1.572 325.74 7.211 71.00 8.783 116.59
512 128 8 1536 1.585 323.12 11.803 86.75 13.388 114.73
512 128 16 2560 1.582 323.66 28.380 72.16 29.962 85.44

Going to try and see if a limit of 16 makes more sense, as N=8 is now outperforming N=16.

@Mushoz

Mushoz commented Dec 28, 2024

I didn't see a perf regression for N==1

What did you mean by this, btw? I can clearly see a 0.5 token/sec drop in my N=1 result on this branch vs the master branch. I think that's outside the margin of error?

@jeffbolznv
Collaborator Author

I meant in my own local testing. Is this outside the margin of error for you?

@Mushoz

Mushoz commented Dec 28, 2024

Limit at 16:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.596 320.78 3.896 32.85 5.492 116.53
512 128 2 768 1.568 326.60 5.129 49.91 6.697 114.68
512 128 4 1024 1.575 325.00 7.209 71.02 8.785 116.57
512 128 8 1536 1.581 323.78 11.813 86.68 13.394 114.67
512 128 16 2560 1.589 322.18 71.415 28.68 73.005 35.07

So it seems like 8 is indeed the sweet spot.

@jeffbolznv
Collaborator Author

I'm surprised it's worse at 16. Maybe it's using too many registers? You could try changing rm_kq and rm_stdq to 1; it may not make sense to do multiple rows with such a large value of N.

@Mushoz

Mushoz commented Dec 29, 2024

I'm surprised it's worse at 16

Just to double check: I merely increased mul_mat_vec_max_cols from 8 to 16. That was the change you wanted me to test, right?

You could try changing rm_kq and rm_stdq to 1

Any pointers what changes exactly I need to make? I am not very familiar with the llama.cpp codebase unfortunately.

@0cc4m
Collaborator

0cc4m commented Dec 29, 2024

I ran the test-backend-ops perf benchmark on my devices for 1,2,3,4,5,8,16 and 32. Note that I set the limit to 16 to be able to see what difference it makes there. Looks good overall and I think 8 is a decent compromise between number of shaders to compile and performance.

The x-axis indices map to these tests:

 0: type_a=f32,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 1: type_a=f16,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 2: type_a=q4_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 3: type_a=q4_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 4: type_a=q5_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 5: type_a=q5_1,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 6: type_a=q8_0,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 7: type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 8: type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
 9: type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
10: type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
11: type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]
12: type_a=iq4_nl,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]

mmv_small_n_rtx3090
mmv_small_n_rx6800xt
mmv_small_n_radeonvii
mmv_small_n_a770

@jeffbolznv
Collaborator Author

You could try changing rm_kq and rm_stdq to 1

Any pointers what changes exactly I need to make?

Just set these values to 1 at around line 1861 in ggml-vulkan.cpp.

@Mushoz

Mushoz commented Dec 29, 2024

Slightly more detailed comparison on my 7900 XTX:

Master:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.598 320.39 3.922 32.63 5.520 115.94
512 128 2 768 1.577 324.76 31.832 8.04 33.409 22.99
512 128 3 896 1.594 321.26 33.806 11.36 35.400 25.31
512 128 4 1024 1.586 322.85 33.774 15.16 35.359 28.96
512 128 5 1152 1.586 322.83 35.551 18.00 37.137 31.02
512 128 6 1280 1.584 323.25 35.622 21.56 37.206 34.40
512 128 7 1408 1.582 323.58 37.415 23.95 38.998 36.10
512 128 8 1536 1.586 322.89 37.560 27.26 39.145 39.24
512 128 9 1664 1.584 323.22 27.789 41.45 29.373 56.65
512 128 10 1792 1.583 323.53 27.849 45.96 29.432 60.89
512 128 11 1920 1.581 323.94 27.928 50.41 29.509 65.07
512 128 12 2048 1.583 323.39 27.987 54.88 29.570 69.26
512 128 13 2176 1.579 324.24 28.080 59.26 29.659 73.37
512 128 14 2304 1.581 323.87 28.159 63.64 29.740 77.47
512 128 15 2432 1.583 323.36 28.255 67.95 29.839 81.51
512 128 16 2560 1.582 323.65 28.287 72.40 29.869 85.71

This PR (limit set to 16):

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.591 321.74 3.893 32.88 5.485 116.69
512 128 2 768 1.558 328.60 5.104 50.15 6.663 115.27
512 128 3 896 1.562 327.81 6.254 61.40 7.816 114.63
512 128 4 1024 1.567 326.74 7.161 71.49 8.728 117.32
512 128 5 1152 1.570 326.13 8.663 73.87 10.233 112.57
512 128 6 1280 1.568 326.50 9.596 80.03 11.164 114.65
512 128 7 1408 1.573 325.48 10.844 82.63 12.417 113.39
512 128 8 1536 1.572 325.66 11.739 87.23 13.311 115.39
512 128 9 1664 1.572 325.76 12.732 90.48 14.304 116.33
512 128 10 1792 1.577 324.60 14.082 90.90 15.659 114.44
512 128 11 1920 1.574 325.39 15.109 93.19 16.682 115.09
512 128 12 2048 1.578 324.44 16.934 90.71 18.512 110.63
512 128 13 2176 1.577 324.67 24.333 68.38 25.910 83.98
512 128 14 2304 1.581 323.88 46.061 38.90 47.642 48.36
512 128 15 2432 1.578 324.56 59.324 32.36 60.902 39.93
512 128 16 2560 1.575 325.17 70.868 28.90 72.443 35.34

Conclusions:

  1. I do not see an N=1 regression. My earlier master result was higher because I had used an older master build. So it might have regressed somewhere, but it wasn't caused by this PR.
  2. Peak token generation performance is obtained at N=11, but even at N=12 performance is still high and much higher than master.
  3. While there is a big drop-off at N=13, it's still performing better than master.
  4. It's only at N >= 14 that performance falls below master.
  5. Specifically for my 7900 XTX, a mul_mat_vec_max_cols of 12 is probably ideal, but unfortunately it's not a clean power-of-2 value.

Interesting master ROCM comparison (without FA):

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 0.670 763.61 4.899 26.13 5.569 114.91
512 128 2 768 0.671 762.84 6.125 41.80 6.796 113.01
512 128 3 896 0.679 753.84 7.486 51.29 8.166 109.73
512 128 4 1024 0.677 756.66 8.885 57.62 9.562 107.09
512 128 5 1152 0.686 745.88 10.358 61.79 11.044 104.31
512 128 6 1280 0.677 756.27 11.944 64.30 12.621 101.42
512 128 7 1408 0.684 748.71 14.093 63.58 14.777 95.28
512 128 8 1536 0.681 751.67 15.630 65.52 16.311 94.17
512 128 9 1664 0.689 742.72 10.546 109.23 11.236 148.10
512 128 10 1792 0.691 741.07 10.614 120.60 11.305 158.52
512 128 11 1920 0.687 745.44 10.677 131.87 11.364 168.96
512 128 12 2048 0.684 748.16 10.791 142.34 11.475 178.47
512 128 13 2176 0.684 748.35 10.798 154.11 11.482 189.52
512 128 14 2304 0.683 749.86 10.850 165.16 11.533 199.77
512 128 15 2432 0.683 749.46 10.932 175.63 11.615 209.38
512 128 16 2560 0.690 742.16 10.980 186.52 11.670 219.37

Conclusions:

  1. The Vulkan implementation is MUCH faster at batch sizes 1 through 8. It's funny and sad that a generic Vulkan implementation is outperforming AMD's dedicated stack.
  2. ROCm gets a BIG performance increase at N=9 and scales much better than Vulkan. I am assuming this is their matrix multiplication code path. It's outside the scope of this PR, which focuses on the matrix-vector multiplication instead, but it does suggest there is a lot of room for optimization on the matrix multiplication code path for Vulkan as well.

I will now make the suggested changes and re-run batch sizes 1 through 16 to see if setting those values to 1 is going to make any difference.

@Mushoz
Copy link

Mushoz commented Dec 29, 2024

Just set these values to 1 at around line 1861 in ggml-vulkan.cpp.

Damn, I am stupid. I didn't find those variables because I was looking in the diff instead of the actual file. I was able to run the benchmarks now:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.587 322.61 3.833 33.40 5.420 118.08
512 128 2 768 1.566 327.01 5.527 46.31 7.093 108.27
512 128 3 896 1.575 325.08 7.022 54.68 8.597 104.22
512 128 4 1024 1.580 324.03 8.393 61.00 9.973 102.68
512 128 5 1152 1.582 323.70 9.904 64.62 11.485 100.30
512 128 6 1280 1.586 322.86 11.499 66.79 13.085 97.83
512 128 7 1408 1.585 323.02 13.361 67.06 14.946 94.21
512 128 8 1536 1.580 324.10 14.913 68.66 16.493 93.13
512 128 9 1664 1.576 324.79 16.812 68.52 18.389 90.49
512 128 10 1792 1.579 324.30 18.379 69.64 19.958 89.79
512 128 11 1920 1.576 324.79 20.195 69.72 21.772 88.19
512 128 12 2048 1.578 324.39 21.864 70.25 23.443 87.36
512 128 13 2176 1.579 324.32 24.115 69.00 25.694 84.69
512 128 14 2304 1.580 323.96 25.822 69.40 27.402 84.08
512 128 15 2432 1.583 323.48 27.718 69.27 29.301 83.00
512 128 16 2560 1.581 323.88 29.914 68.46 31.495 81.28

As you can see, the sharp performance drop-off at batch sizes 14, 15 and 16 is completely gone. Batch size 13 performs very similarly to the previous test. But for all batch sizes lower than 13, performance is worse with this suggested change.

Ideally we would set rm_kq and rm_stdq to 1 only at those batch sizes that benefit from it, but:

  1. I feel it's an ugly hack.
  2. Even with this change, performance is lower than at batch sizes 11 and 12 without these changes, so it doesn't make sense to go with higher batch sizes anyway.

@0cc4m
Collaborator

0cc4m commented Dec 29, 2024

The Vulkan implementation is MUCH faster from batch sizes 1 through 8. It's funny and sad that a generic Vulkan implementation is outperforming AMD's dedicated stack.

Vulkan, ROCm and CUDA are all just APIs. Vulkan has a different focus, but it's also very low-level and (apart from being less convenient to use for compute-only programs) isn't inherently worse. Most relevant is the device code, not necessarily the API it's written in. But of course there are some limitations to Vulkan that the compute APIs don't have.

Ideally we set rm_kq and rm_stdq only at those batchsizes that benefit from it, but:

I feel it's an ugly hack

This kind of tuning is very common for GPUs, it's why libraries like cuBLAS are huge. They contain tons of specific kernels and the heuristics to pick them in an optimal way for different problem sizes and device capabilities.

At some point we'll probably need to implement an auto-tuner to be able to keep up with the number of hardware configurations and tuning parameters in the Vulkan backend. It's already quite a lot.
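The auto-tuner idea mentioned above can be sketched very simply: time each candidate configuration once on the target device and keep the fastest. This is an illustrative skeleton (the cost callback stands in for a real GPU timing run; none of these names exist in the codebase):

```cpp
#include <functional>
#include <limits>
#include <vector>

// Benchmark-driven selection: measure each candidate (e.g. a tile size or a
// mul_mat_vec column limit) and remember the one with the lowest runtime.
int pick_fastest(const std::vector<int>& candidates,
                 const std::function<double(int)>& measure_us) {
    int best = candidates.front();
    double best_us = std::numeric_limits<double>::max();
    for (int c : candidates) {
        const double us = measure_us(c);
        if (us < best_us) { best_us = us; best = c; }
    }
    return best;
}
```

The hard parts in practice are not this loop but deciding when to re-tune, where to cache the results, and how to keep the measurement representative of real workloads.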

@Mushoz

Mushoz commented Dec 29, 2024

Most relevant is the device code, not necessarily the API it's written in.

This is kinda going offtopic, so please let me know if I should move this conversation elsewhere, but does that mean ROCM should be able to get similar performance at batch sizes 1 through 8 (especially N=1 is severely lacking to be honest) with optimization within llama.cpp itself? Or did I misunderstand you?

@0cc4m
Collaborator

0cc4m commented Dec 29, 2024

Most relevant is the device code, not necessarily the API it's written in.

This is kinda going offtopic, so please let me know if I should move this conversation elsewhere, but does that mean ROCM should be able to get similar performance at batch sizes 1 through 8 (especially N=1 is severely lacking to be honest) with optimization within llama.cpp itself? Or did I misunderstand you?

Yeah, the ROCm backend is basically using the CUDA code. It's mostly tuned for Nvidia, so AMD performance is not optimal. But so far there is no developer willing to put in the time to work on it.

You can see the code selecting different matmul variants (matmul is always the most relevant operation for performance) in ggml_cuda_mul_mat in ggml-cuda.cu. It's the equivalent of the ggml_vk_mul_mat function, which is simpler.

@0cc4m
Collaborator

0cc4m commented Dec 29, 2024

I ran the batched bench with llama 8b q4_0 for my devices as well to gather some more data for tuning.

RTX 3090 Master:
PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 0.184 2777.25 1.551 82.55 1.735 368.90
512 128 2 768 0.176 2902.45 8.671 29.52 8.847 86.80
512 128 3 896 0.177 2896.01 8.886 43.21 9.063 98.86
512 128 4 1024 0.177 2885.71 8.983 56.99 9.161 111.78
512 128 5 1152 0.178 2875.31 9.073 70.54 9.251 124.52
512 128 6 1280 0.178 2877.76 9.167 83.78 9.345 136.97
512 128 7 1408 0.178 2875.86 10.085 88.84 10.263 137.19
512 128 8 1536 0.177 2886.69 10.270 99.71 10.447 147.03
512 128 9 1664 0.179 2863.98 6.768 170.22 6.946 239.55
512 128 10 1792 0.179 2858.96 6.843 187.04 7.022 255.18
512 128 11 1920 0.179 2859.54 6.945 202.74 7.124 269.52
512 128 12 2048 0.179 2854.13 7.023 218.71 7.202 284.35
512 128 13 2176 0.179 2853.05 7.166 232.20 7.346 296.22
512 128 14 2304 0.180 2847.83 7.242 247.46 7.421 310.46
512 128 15 2432 0.180 2843.01 7.330 261.93 7.510 323.82
512 128 16 2560 0.180 2838.14 7.394 276.98 7.574 337.98
512 128 17 2688 0.175 2923.24 7.575 287.24 7.751 346.81

PR:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 0.184 2775.11 1.541 83.07 1.725 370.94
512 128 2 768 0.175 2922.84 1.874 136.58 2.049 374.73
512 128 3 896 0.176 2905.28 2.136 179.76 2.312 387.48
512 128 4 1024 0.176 2903.86 2.390 214.26 2.566 399.07
512 128 5 1152 0.175 2930.12 2.737 233.84 2.912 395.66
512 128 6 1280 0.177 2894.68 3.074 249.82 3.251 393.71
512 128 7 1408 0.178 2879.33 3.404 263.22 3.582 393.09
512 128 8 1536 0.179 2861.74 3.891 263.18 4.070 377.42
512 128 9 1664 0.179 2863.68 4.192 274.84 4.370 380.75
512 128 10 1792 0.180 2851.18 4.514 283.57 4.693 381.81
512 128 11 1920 0.180 2845.28 4.843 290.72 5.023 382.23
512 128 12 2048 0.181 2835.95 5.179 296.59 5.359 382.13
512 128 13 2176 0.180 2839.11 5.565 298.99 5.746 378.72
512 128 14 2304 0.182 2813.06 7.642 234.49 7.824 294.48
512 128 15 2432 0.182 2820.69 9.542 201.21 9.724 250.11
512 128 16 2560 0.182 2820.77 9.229 221.90 9.411 272.03
512 128 17 2688 0.182 2816.24 7.571 287.43 7.752 346.73
Radeon RX 6800 XT Master:
PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 0.629 813.80 1.909 67.06 2.538 252.19
512 128 2 768 0.614 833.76 15.228 16.81 15.842 48.48
512 128 3 896 0.614 834.33 16.127 23.81 16.740 53.52
512 128 4 1024 0.614 834.17 16.235 31.54 16.849 60.78
512 128 5 1152 0.615 832.45 16.774 38.15 17.389 66.25
512 128 6 1280 0.616 831.65 16.860 45.55 17.476 73.24
512 128 7 1408 0.616 830.56 17.741 50.50 18.357 76.70
512 128 8 1536 0.617 829.50 17.816 57.48 18.433 83.33
512 128 9 1664 0.617 829.50 12.526 91.97 13.143 126.61
512 128 10 1792 0.622 823.67 12.574 101.79 13.196 135.80
512 128 11 1920 0.619 826.59 12.622 111.55 13.241 145.00
512 128 12 2048 0.620 825.67 12.654 121.38 13.274 154.28
512 128 13 2176 0.620 825.36 12.734 130.67 13.355 162.94
512 128 14 2304 0.622 822.74 12.789 140.12 13.412 171.79
512 128 15 2432 0.622 823.25 12.829 149.66 13.451 180.80
512 128 16 2560 0.621 824.31 12.896 158.81 13.517 189.39
512 128 17 2688 0.640 800.54 13.041 166.86 13.681 196.48
512 128 18 2816 0.626 817.53 13.126 175.53 13.752 204.76
512 128 19 2944 0.624 820.74 13.244 183.62 13.868 212.28
512 128 20 3072 0.624 819.90 13.303 192.43 13.928 220.57
512 128 21 3200 0.626 818.32 13.372 201.01 13.998 228.60
512 128 22 3328 0.626 818.47 13.415 209.91 14.041 237.02
512 128 23 3456 0.626 817.59 13.472 218.52 14.099 245.13
512 128 24 3584 0.628 815.79 13.534 226.98 14.162 253.08
512 128 25 3712 0.625 818.75 13.598 235.32 14.224 260.97
512 128 26 3840 0.625 818.56 13.644 243.92 14.270 269.11
512 128 27 3968 0.626 818.48 13.688 252.49 14.313 277.22
512 128 28 4096 0.626 817.98 13.753 260.60 14.379 284.87

PR:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 0.638 802.45 1.909 67.07 2.547 251.31
512 128 2 768 0.622 823.51 2.343 109.26 2.965 259.05
512 128 3 896 0.623 821.86 2.696 142.44 3.319 269.97
512 128 4 1024 0.628 815.81 3.004 170.42 3.632 281.94
512 128 5 1152 0.626 817.56 3.312 193.23 3.938 292.51
512 128 6 1280 0.627 816.57 3.891 197.35 4.518 283.28
512 128 7 1408 0.629 814.55 4.397 203.79 5.025 280.19
512 128 8 1536 0.628 815.18 4.481 228.50 5.110 300.61
512 128 9 1664 0.629 814.54 4.395 262.13 5.023 331.26
512 128 10 1792 0.629 814.38 5.112 250.39 5.741 312.15
512 128 11 1920 0.630 812.92 5.982 235.36 6.612 290.37
512 128 12 2048 0.631 811.70 5.691 269.91 6.322 323.97
512 128 13 2176 0.630 813.08 6.155 270.33 6.785 320.70
512 128 14 2304 0.630 812.58 6.604 271.34 7.234 318.48
512 128 15 2432 0.631 811.08 7.108 270.11 7.739 314.23
512 128 16 2560 0.630 812.34 7.615 268.94 8.245 310.48
512 128 17 2688 0.639 800.97 8.030 270.98 8.669 310.06
512 128 18 2816 0.628 815.18 8.810 261.51 9.438 298.36
512 128 19 2944 0.629 814.20 8.657 280.93 9.286 317.04
512 128 20 3072 0.631 811.73 9.151 279.74 9.782 314.04
512 128 21 3200 0.631 811.54 9.614 279.60 10.244 312.36
512 128 22 3328 0.632 810.56 10.089 279.11 10.721 310.42
512 128 23 3456 0.632 810.63 10.472 281.13 11.104 311.25
512 128 24 3584 0.631 810.95 10.849 283.16 11.480 312.18
512 128 25 3712 0.631 810.98 11.389 280.98 12.020 308.82
512 128 26 3840 0.632 810.37 11.800 282.04 12.431 308.89
512 128 27 3968 0.632 810.67 12.281 281.40 12.913 307.29
512 128 28 4096 0.632 810.24 12.803 279.95 13.434 304.89
Radeon Pro VII Master:
PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.741 294.01 2.442 52.43 4.183 153.00
512 128 2 768 1.693 302.49 40.048 6.39 41.741 18.40
512 128 3 896 1.703 300.70 42.925 8.95 44.628 20.08
512 128 4 1024 1.702 300.81 42.054 12.17 43.756 23.40
512 128 5 1152 1.701 301.07 45.862 13.96 47.562 24.22
512 128 6 1280 1.707 299.95 46.250 16.61 47.957 26.69
512 128 7 1408 1.710 299.35 49.029 18.27 50.740 27.75
512 128 8 1536 1.715 298.59 48.532 21.10 50.247 30.57
512 128 9 1664 1.721 297.58 32.824 35.10 34.544 48.17
512 128 10 1792 1.724 296.90 33.091 38.68 34.815 51.47
512 128 11 1920 1.730 295.98 33.209 42.40 34.939 54.95
512 128 12 2048 1.734 295.32 33.329 46.09 35.063 58.41
512 128 13 2176 1.737 294.69 33.596 49.53 35.333 61.59
512 128 14 2304 1.737 294.79 33.757 53.09 35.494 64.91
512 128 15 2432 1.738 294.60 33.891 56.65 35.629 68.26
512 128 16 2560 1.738 294.54 34.077 60.10 35.815 71.48
512 128 17 2688 1.760 290.98 33.704 64.56 35.463 75.80
512 128 18 2816 1.722 297.40 30.184 76.33 31.906 88.26
512 128 19 2944 1.681 304.53 29.792 81.63 31.474 93.54
512 128 20 3072 1.685 303.90 32.786 78.08 34.471 89.12
512 128 21 3200 1.692 302.67 34.458 78.01 36.150 88.52
512 128 22 3328 1.734 295.34 34.685 81.19 36.418 91.38
512 128 23 3456 1.741 294.08 35.187 83.67 36.928 93.59
512 128 24 3584 1.740 294.33 35.308 87.01 37.048 96.74
512 128 25 3712 1.748 292.91 35.382 90.44 37.130 99.97
512 128 26 3840 1.742 293.98 35.524 93.68 37.265 103.04

PR:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 1.763 290.40 2.463 51.96 4.226 151.43
512 128 2 768 1.716 298.36 3.789 67.56 5.505 139.51
512 128 3 896 1.726 296.72 4.480 85.71 6.206 144.38
512 128 4 1024 1.724 296.93 5.594 91.53 7.318 139.93
512 128 5 1152 1.732 295.61 6.977 91.73 8.709 132.28
512 128 6 1280 1.730 295.93 8.196 93.70 9.926 128.95
512 128 7 1408 1.726 296.70 10.312 86.89 12.038 116.97
512 128 8 1536 1.721 297.56 10.221 100.19 11.941 128.63
512 128 9 1664 1.720 297.61 10.404 110.73 12.124 137.24
512 128 10 1792 1.729 296.11 11.296 113.31 13.025 137.58
512 128 11 1920 1.723 297.11 12.670 111.13 14.394 133.39
512 128 12 2048 1.727 296.49 14.170 108.40 15.897 128.83
512 128 13 2176 1.739 294.48 15.512 107.27 17.251 126.14
512 128 14 2304 1.731 295.71 16.882 106.15 18.613 123.78
512 128 15 2432 1.739 294.44 17.615 109.00 19.354 125.66
512 128 16 2560 1.731 295.71 18.534 110.50 20.266 126.32
512 128 17 2688 1.742 293.92 20.909 104.07 22.651 118.67
512 128 18 2816 1.710 299.43 22.160 103.97 23.870 117.97
512 128 19 2944 1.714 298.77 23.357 104.12 25.070 117.43
512 128 20 3072 1.716 298.34 24.178 105.88 25.894 118.64
512 128 21 3200 1.715 298.49 25.309 106.21 27.024 118.41
512 128 22 3328 1.717 298.20 33.204 84.81 34.921 95.30
512 128 23 3456 1.716 298.28 35.078 83.93 36.795 93.93
512 128 24 3584 1.727 296.51 36.639 83.85 38.366 93.42
512 128 25 3712 1.725 296.84 38.222 83.72 39.947 92.92
512 128 26 3840 1.728 296.24 39.838 83.54 41.567 92.38
512 128 27 3968 1.732 295.56 41.765 82.75 43.497 91.22
512 128 28 4096 1.746 293.32 42.828 83.68 44.574 91.89
Intel A770 Master:
PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 5.381 95.14 3.847 33.27 9.228 69.35
512 128 2 768 5.365 95.43 71.583 3.58 76.948 9.98
512 128 3 896 5.365 95.43 72.682 5.28 78.047 11.48
512 128 4 1024 5.387 95.05 73.944 6.92 79.331 12.91
512 128 5 1152 5.371 95.33 74.915 8.54 80.286 14.35
512 128 6 1280 5.370 95.35 76.088 10.09 81.458 15.71
512 128 7 1408 5.379 95.19 77.224 11.60 82.603 17.05
512 128 8 1536 5.378 95.19 78.374 13.07 83.752 18.34
512 128 9 1664 5.385 95.08 66.064 17.44 71.449 23.29
512 128 10 1792 5.374 95.28 66.214 19.33 71.588 25.03
512 128 11 1920 5.377 95.21 66.451 21.19 71.829 26.73
512 128 12 2048 5.385 95.07 66.647 23.05 72.033 28.43
512 128 13 2176 5.380 95.17 67.145 24.78 72.524 30.00
512 128 14 2304 5.382 95.14 67.395 26.59 72.777 31.66
512 128 15 2432 5.381 95.16 67.573 28.41 72.953 33.34
512 128 16 2560 5.376 95.23 67.757 30.23 73.133 35.00

PR:

PP TG B N_KV T_PP s S_PP t/s T_TG s S_TG t/s T s S t/s
512 128 1 640 5.404 94.74 3.941 32.48 9.346 68.48
512 128 2 768 5.382 95.13 6.033 42.43 11.415 67.28
512 128 3 896 5.390 94.99 6.407 59.94 11.797 75.95
512 128 4 1024 5.383 95.11 6.657 76.91 12.041 85.05
512 128 5 1152 5.387 95.05 8.374 76.43 13.760 83.72
512 128 6 1280 5.370 95.34 11.031 69.62 16.401 78.04
512 128 7 1408 5.369 95.37 28.588 31.34 33.957 41.46
512 128 8 1536 5.365 95.44 233.356 4.39 238.721 6.43
512 128 9 1664 5.384 95.10 212.586 5.42 217.969 7.63
512 128 10 1792 5.377 95.21 164.750 7.77 170.127 10.53
512 128 11 1920 5.372 95.31 567.201 2.48 572.572 3.35

(Performance got so low that I stopped the test)

It seems something around 13 is optimal for the RTX 3090, around 22 for the Radeon Pro VII, and 7 for the A770. On the RX 6800 XT I reached the maximum batch size of 28 that the benchmark offered and still didn't reach the point where the matmul shader got more efficient.

Edit: But this heavily depends on quant complexity. With q4_0 the matrix-vector shader stays efficient up to much larger n than with q4_k_s, at least on AMD.

@jeffbolznv
Collaborator Author

Interesting master ROCM comparison (without FA):

I don't know exactly what the batched-bench is measuring, but I noticed that the TG results are affected by the -npp option, so if you compare against ROCm then the better PP perf on ROCm may skew the TG results. It's interesting that Vulkan prompt processing is still half the performance of ROCm, even with cooperative matrix enabled. Maybe there's some additional tuning that could be done there.

As you can see, the sharp performance drop-off at batch size 14, 15 and 16 is completely gone.

Thanks, I think it's very likely that these cases were running out of registers when doing so many rows*cols.

I don't know much about how speculative decoding is used, how interesting are the n=9 to 16 cases? I think we should go with this PR as-is right now and we could always tune it further in the future.

@netrunnereve
Collaborator

Even with the columns set to 16 and the rows set to 4, this actually doesn't use that many registers.

With Q4_0 at a 64 subgroup size, 4 rows and 16 columns I'm getting 54/256 vector registers used on GCN, and 44 for Q8_0. For Q4_K and Q6_K it's in the 30-register range.

@0cc4m
Collaborator

0cc4m commented Dec 30, 2024

Interesting master ROCM comparison (without FA):

I don't know what exactly the batched-bench is measuring, but I noticed that the TG results are affected by the -npp option, so if you compare against ROCM then the better PP perf on ROCM may skew the TG results. It's interesting that Vulkan prompt processing is still half the performance of ROCm, even with cooperative matrix enabled. Maybe there's some additional tuning that could be done there.

That might just be the prompt size affecting tg. Basically a larger kv cache means more calculations for each token, which slows down tg. But that should not be affected by pp speed.

There's definitely still a lot of room for tuning in the matrix multiplication shader, yes. If you have suggestions which directions I could investigate let me know.

@0cc4m 0cc4m merged commit 716bd6d into ggerganov:master Dec 30, 2024
48 checks passed
@jeffbolznv
Collaborator Author

With Q4_0 and 64 subgroup size/4 rows/16 columns I'm getting for GCN 54/256 vector registers used, and 44 for Q8_0.

How can this be less than 64?

If you have suggestions which directions I could investigate let me know.

Getting the large tile size working (or understanding why it would be slow) is probably the first step. The medium tile size may not be large enough to avoid being bandwidth limited.

But it also occurred to me that this might be comparing an fp16 matmul in Vulkan against an int8 matmul in ROCm, in which case it's less surprising that it's slower.

@ggerganov
Owner

Sharing my experience from the Metal backend in case it could be useful. Tuning the batch threshold between mat-vec and mat-mat can lead to some gains for small batches, but keep in mind that there are 4 factors in play:

  • memory bandwidth of the device
  • compute capacity of the device
  • matrix sizes (i.e. model size)
  • data type (i.e. quantization)

Back when I first realized this for the Metal backend (#3524 (comment)) I was also thinking along the lines of auto-tuning the BS threshold per-device and per-model, but it seems very complicated to actually implement this in some reasonable manner.

Eventually, I believe I found a good solution in #10581. We now essentially have 3 types of matrix multiplication kernels in the Metal backend:

  • mat-vec (BS = 1)
    As we well know, these kernels are generally memory bandwidth bound. In the Metal backend, they operate at the scalar level.

  • mat-vec-ext (BS <= 8)
    These are similar to the basic mat-vec kernels, but make use of the vector data types float4 and float4x4 (depending on the quant group size) that are available in Metal.

  • mat-mat (BS > 8)
    These utilize the float8x8 simdgroup matrix data types for extra compute.

This results in universally good performance across a wide range of Apple devices and model sizes. There are still some small gains from manually tuning the BS thresholds per device and per model, but the default performance is overall good. I don't know if this is the best way to do it and it's still far from the theoretical linear scaling that we would ideally like to achieve at BS <= 8. Also not sure how applicable this approach is for the Vulkan backend - probably depends on what vector/matrix data types are available.
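The three-tier selection above can be sketched as a simple dispatch function (thresholds are the ones quoted in this comment; the function and enum names are illustrative, not the Metal backend's actual code):

```cpp
// Three-tier kernel selection as described for the Metal backend:
// scalar mat-vec at BS == 1, vectorized mat-vec-ext up to BS <= 8,
// and the simdgroup-matrix mat-mat kernel beyond that.
enum class Kernel { MatVec, MatVecExt, MatMat };

Kernel select_kernel(int batch_size) {
    if (batch_size == 1) return Kernel::MatVec;
    if (batch_size <= 8) return Kernel::MatVecExt;
    return Kernel::MatMat;
}
```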

Pinging @JohannesGaessler in case he wants to give a short summary of what was done in the CUDA backend for small-batch sizes, since I believe the performance is quite good there.

@JohannesGaessler
Collaborator

In the CUDA backend there are in essence three ways to do matrix multiplications:

  1. Dequantize the data to VRAM as FP16 and use cuBLAS GEMM. Easy to implement but needs very large batch sizes to make the overhead negligible. Also needs additional VRAM for the dequantized weight matrix.
  2. Quantize the activations to q8_1 and use simple dot products ("mul_mat_vec_q"). More work but still very manageable. There is no manual use of shared memory to cache any of the input data, so far I haven't been able to produce an implementation that is faster than just relying on automated caching. Used for batch sizes 1-8, after that register pressure kills performance. The __dp4a instruction is used for calculating the dot product, it may make sense to at some point look into an implementation that uses tensor cores. Template specializations by data type and batch size.
  3. Quantize the activations to q8_1, convert the weights to q8 on-the-fly, cache the inputs in shared memory tiles ("mul_mat_q"). A lot of work. Used for batch sizes > 8. Uses int8 tensor cores if available, __dp4a otherwise. Uses stream-k decomposition. Template specializations by data type and batch size.

On most NVIDIA GPUs MMVQ and MMQ are used by default for all batch sizes. On V100s or some AMD GPUs where int8 tensor cores aren't available MMQ is only used up to a batch size of 64.

For MMVQ I've found per-GPU tuning to not really be necessary since you're I/O-bound and to my knowledge it's possible to fully utilize I/O without fully utilizing all SMs. For MMQ I initially used one tile size per data type and GPU architecture but I've found that this is a bad approach. Currently the code precompiles template specializations with varying sizes in ne11 direction (i.e. in the direction of the batch size). At runtime the number of SMs is checked and the minimum tile size that is needed for the minimum number of waves is used. Especially for e.g. an RTX 3090 with 82 SMs I've found this approach to work well. The downside is the long compilation time and large binary size.
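
The runtime tile choice described above ("the minimum tile size that is needed for the minimum number of waves") might look like the following sketch. All the names, the row tile, and the candidate sizes are hypothetical simplifications, not the actual CUDA template machinery:

```python
def pick_mmq_tile_ne11(ne01, ne11, num_sms,
                       row_tile=64, col_tiles=(8, 16, 32, 64, 128)):
    """Pick the smallest ne11 (batch-direction) tile size that already reaches
    the minimum achievable number of waves across all SMs."""
    def nwaves(t):
        # Total thread blocks in the grid, then waves of blocks over the SMs.
        blocks = -(-ne01 // row_tile) * -(-ne11 // t)  # ceil divisions
        return -(-blocks // num_sms)

    best_waves = min(nwaves(t) for t in col_tiles)
    return next(t for t in col_tiles if nwaves(t) == best_waves)
```

For example, with 4096 rows, a batch of 32, and 82 SMs (RTX 3090-like), a 32-wide tile already fits the grid into a single wave, so there is no reason to precompile-select anything larger.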

@jeffbolznv
Copy link
Collaborator Author

This PR is similar to method 2, but the math is done at fp32. For Ada this still seems to be memory-bandwidth limited.

@netrunnereve
Copy link
Collaborator

How can this be less than 64?

That's the maximum number of registers used per thread, so the entire subgroup would use 54*64=3456 registers total.

In the CUDA backend there are in essence three ways to do matrix multiplications:

Methods 2 and 3 need shaderIntegerDotProduct for the best performance, which I think GLSL doesn't support? If you're memory bound this might still be worth it even if you have to rely on regular FMA instructions. Honestly, if we plan on doing this for matrix-matrix multiplications, we might as well go all the way and quantize the activations for matrix-vector inference too, like it's done on the other backends.

@jeffbolznv
Copy link
Collaborator Author

How can this be less than 64?

That's the maximum number of registers used per thread, so the entire subgroup would use 54*64=3456 registers total.

How is it less than 64 per thread, since there are 4*16 accumulator values per thread? Unless the compiler is spilling them to memory, which would be surprising.

@netrunnereve
Copy link
Collaborator

You're right. At this point I have no idea where I got those numbers from (I probably loaded the wrong shader?) and I certainly can't reproduce them now 🤦‍♀️...

I ran the tools again and here are the hopefully correct numbers for Q4_0 with 64 subgroup size and 4 rows.

  • 16 columns: 128 registers
  • 16 columns with manual unrolling disabled in compute_outputs: 115 registers
  • 32 columns: 184 registers

The register utilization in this case is high enough to reduce the number of subgroups that can be lined up in front of each core, but at least it's not overflowing and spilling to memory. RGA spits out a warning when there's spilling so the compiler shouldn't be hiding it.

@0cc4m
Copy link
Collaborator

0cc4m commented Jan 2, 2025

If you have suggestions which directions I could investigate let me know.

Getting the large tile size working (or understanding why it would be slow) is probably the first step. The medium tile size may not be large enough to avoid being bandwidth limited.

I saw no performance increase, or even a performance drop, when benchmarking the large tile size against medium on AMD. I managed to get Radeon GPU Profiler to work; maybe that will give me a hint as to why.

@ggerganov @JohannesGaessler Thank you for the summaries of how matrix multiplication is handled in Metal and CUDA.

Vulkan just has two kinds of shaders currently:

  • mul_mat_vec for batches <= 8
    • float32 precision, no shared memory caching
  • mul_mat_mm for general matrix multiply
    • Warptiling implementation with multiple cache layers. Dequantizes into shared memory, then uses either float16 or float32 for the calculations, depending on the precision required and hardware support. Uses float16 tensor cores if available.
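
The dispatch between the two shader families boils down to a single batch-size check, roughly (a sketch of the decision, not the actual ggml-vulkan code):

```python
def pick_vulkan_matmul_shader(batch_size):
    # Small batches take the matrix-vector path (fp32, no shared-memory
    # caching); everything larger uses the warptiling mul_mat_mm path.
    return "mul_mat_vec" if batch_size <= 8 else "mul_mat_mm"
```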

It would be interesting to compare the implementations (especially with Metal) in a like-for-like scenario. With CUDA that's easy, but with Metal we'd have to find a GPU with similar hardware specs to Apple's.

Vulkan always has a little more difficulty since the hardware it runs on is not as uniform as it is for Metal and CUDA. AMD, Intel and Nvidia all have different architectures that offer different features and prefer different work sizes (not to even mention phones).

I think a good next step would be looking into q8_1 for the activations and int8 for the multiplications, for general matrix multiply. As @netrunnereve mentioned, DP4A is available to Vulkan as part of the VK_KHR_shader_integer_dot_product extension, but not directly usable from GLSL. We should be able to use it with SPIR-V intrinsics. @jeffbolznv has used them before, but not for an operation that needed to access registers. Do you know if this is possible?
int8 tensor cores are available directly in GLSL and should be straightforward to use. I just have to figure out the math for quantized integer matmul first.
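
For reference, the operation in question has simple semantics: treat each 32-bit word as four signed 8-bit lanes, dot them, and accumulate into a 32-bit integer. A Python emulation of that packed dot product (the semantics `__dp4a` and the Vulkan integer dot-product ops provide in hardware; saturation variants omitted):

```python
import struct

def unpack_s8x4(word):
    # Reinterpret a 32-bit word as four signed 8-bit lanes (little-endian).
    return struct.unpack('<4b', struct.pack('<I', word & 0xFFFFFFFF))

def dot_s8x4(a_word, b_word, acc=0):
    # 4-way signed int8 dot product with 32-bit accumulation.
    return acc + sum(x * y for x, y in
                     zip(unpack_s8x4(a_word), unpack_s8x4(b_word)))
```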

@JohannesGaessler
Copy link
Collaborator

I forgot to mention: I got sidetracked with a refactor of the GGUF code, but I am still working on llama.cpp training. I think one of the more relevant use cases will be training LoRAs on top of quantized models. Due to the high memory requirements of training, good performance at small batch sizes will be doubly important (but the current int8-based CUDA code will not work for transposed matrices, I think).

I think a good next step would be looking into q8_1 for the activations and int8 for the multiplications, for general matrix multiply.

I should mention, though, that I have never been able to get more than ~40% utilization of int8 tensor cores (on RTX 3090/4090). The throughput of int8 is 2x that of FP16, so I have effectively only been able to achieve ~80% of the maximum theoretical FP16 throughput. This could simply be due to my own inadequacies, and it's very possible that if I had used FP16 tensor cores the utilization would have been similarly low. For NVIDIA GPUs without tensor cores, the use of __dp4a is easily faster than floating-point arithmetic.

I just have to figure out the math for quantized integer matmul first.

I can talk you through how to do it.
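
For what it's worth, here is my understanding of the algebra for one common pairing, q4_0 weights against q8_1 activations (a hypothetical Python sketch, not code from any backend). With w_i = d_w * (q_w,i - 8) and a_i ≈ d_a * q_a,i, the -8 nibble offset folds into a single correction term, because s_a = d_a * Σ q_a,i is exactly the per-block sum that q8_1 already stores:

```python
import random

def dot_q4_0_q8_1(d_w, qw, d_a, qa, s_a):
    # Integer part first (this is what dp4a / int8 tensor cores would chew on),
    # then rescale and apply the single offset-correction term:
    #   sum_i w_i*a_i = d_w*d_a * sum_i q_w,i*q_a,i - 8*d_w*s_a
    acc = sum(w * a for w, a in zip(qw, qa))
    return d_w * d_a * acc - 8.0 * d_w * s_a

# Toy block of 32 values, checked against the fully dequantized reference.
random.seed(0)
qw = [random.randrange(16) for _ in range(32)]          # 4-bit weight quants
qa = [random.randrange(-127, 128) for _ in range(32)]   # int8 activation quants
d_w, d_a = 0.02, 0.01
s_a = d_a * sum(qa)                                     # the sum q8_1 stores
fast = dot_q4_0_q8_1(d_w, qw, d_a, qa, s_a)
ref = sum(d_w * (w - 8) * d_a * a for w, a in zip(qw, qa))
```

The upshot is that the inner loop never touches the offset at all; only one extra multiply-add per block pair is needed.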

Successfully merging this pull request may close these issues.

Misc. bug: Vulkan backend with 7900XTX has severe performance dropoff at some batch sizes