| Loop Id: 127 | Module: exec | Source: accelerate_kernel.f90:62-76 | Coverage: 5.28% |
|---|---|---|---|
0x427900 VMOVUPD -0x8(%RDX,%R8,8),%ZMM1 [9]
0x42790b VMOVUPD (%RDX,%R8,8),%ZMM2 [9]
0x427912 VMULPD -0x8(%R11,%R8,8),%ZMM1,%ZMM1 [3]
0x42791d VFMADD231PD (%R11,%R8,8),%ZMM2,%ZMM1 [3]
0x427924 VMOVUPD -0x8(%RCX,%R8,8),%ZMM2 [6]
0x42792f VMOVUPD (%RCX,%R8,8),%ZMM4 [6]
0x427936 VFMADD132PD (%RSI,%R8,8),%ZMM1,%ZMM4 [1]
0x42793d VFMADD231PD -0x8(%RSI,%R8,8),%ZMM2,%ZMM4 [1]
0x427948 VMULPD %ZMM3,%ZMM4,%ZMM1
0x42794e VDIVPD %ZMM1,%ZMM0,%ZMM1
0x427954 VMOVUPD (%RDI,%R8,8),%ZMM2 [8]
0x42795b VMOVUPD -0x8(%RBX,%R8,8),%ZMM4 [4]
0x427966 VMOVUPD (%RBX,%R8,8),%ZMM5 [4]
0x42796d VSUBPD %ZMM5,%ZMM4,%ZMM18
0x427973 VMULPD %ZMM2,%ZMM18,%ZMM18
0x427979 VMOVUPD (%R15,%R8,8),%ZMM19 [2]
0x427980 VMOVUPD -0x8(%RAX,%R8,8),%ZMM20 [16]
0x42798b VMOVUPD (%RAX,%R8,8),%ZMM21 [16]
0x427992 VSUBPD %ZMM21,%ZMM20,%ZMM22
0x427998 VFMADD213PD %ZMM18,%ZMM19,%ZMM22
0x42799e VMOVUPD -0x8(%R9,%R8,8),%ZMM18 [5]
0x4279a9 VMOVUPD (%R9,%R8,8),%ZMM23 [5]
0x4279b0 VSUBPD %ZMM5,%ZMM21,%ZMM5
0x4279b6 VMULPD %ZMM5,%ZMM23,%ZMM5
0x4279bc VSUBPD %ZMM4,%ZMM20,%ZMM4
0x4279c2 VFMADD213PD %ZMM5,%ZMM18,%ZMM4
0x4279c8 VMOVUPD -0x8(%R13,%R8,8),%ZMM5 [10]
0x4279d3 VMOVUPD (%R13,%R8,8),%ZMM20 [10]
0x4279db VSUBPD %ZMM20,%ZMM5,%ZMM21
0x4279e1 VMOVUPD -0x8(%R12,%R8,8),%ZMM26 [14]
0x4279ec VMOVUPD (%R12,%R8,8),%ZMM27 [14]
0x4279f3 VSUBPD %ZMM27,%ZMM26,%ZMM28
0x4279f9 VFMADD213PD %ZMM22,%ZMM2,%ZMM21
0x4279ff VFMADD231PD %ZMM28,%ZMM19,%ZMM21
0x427a05 VFMADD213PD (%R14,%R8,8),%ZMM1,%ZMM21 [11]
0x427a0c MOV 0x1f8(%RSP),%R10 [12]
0x427a14 VMOVUPD %ZMM21,(%R10,%R8,8) [13]
0x427a1b VSUBPD %ZMM20,%ZMM27,%ZMM2
0x427a21 VSUBPD %ZMM5,%ZMM26,%ZMM5
0x427a27 VFMADD213PD %ZMM4,%ZMM23,%ZMM2
0x427a2d VFMADD231PD %ZMM5,%ZMM18,%ZMM2
0x427a33 MOV 0x38(%RSP),%R10 [12]
0x427a38 VFMADD213PD (%R10,%R8,8),%ZMM1,%ZMM2 [15]
0x427a3f MOV 0x20(%RSP),%R10 [12]
0x427a44 VMOVUPD %ZMM2,(%R10,%R8,8) [7]
0x427a4b ADD $0x8,%R8
0x427a4f CMP 0x40(%RSP),%R8 [12]
0x427a54 JB 427900
/scratch_na/users/xoserete/qaas_runs/171-415-7919/intel/CloverLeafFC/build/CloverLeafFC/CloverLeaf_ref/kernels/accelerate_kernel.f90: 62 - 76 |
-------------------------------------------------------------------------------- |
62: DO j=x_min,x_max+1 |
63: stepbymass_s=halfdt/((density0(j-1,k-1)*volume(j-1,k-1) & |
64: +density0(j ,k-1)*volume(j ,k-1) & |
65: +density0(j ,k )*volume(j ,k ) & |
66: +density0(j-1,k )*volume(j-1,k )) & |
67: *0.25_8) |
68: |
69: xvel1(j,k)=xvel0(j,k)-stepbymass_s*(xarea(j ,k )*(pressure(j ,k )-pressure(j-1,k )) & |
70: +xarea(j ,k-1)*(pressure(j ,k-1)-pressure(j-1,k-1))) |
71: yvel1(j,k)=yvel0(j,k)-stepbymass_s*(yarea(j ,k )*(pressure(j ,k )-pressure(j ,k-1)) & |
72: +yarea(j-1,k )*(pressure(j-1,k )-pressure(j-1,k-1))) |
73: xvel1(j,k)=xvel1(j,k)-stepbymass_s*(xarea(j ,k )*(viscosity(j ,k )-viscosity(j-1,k )) & |
74: +xarea(j ,k-1)*(viscosity(j ,k-1)-viscosity(j-1,k-1))) |
75: yvel1(j,k)=yvel1(j,k)-stepbymass_s*(yarea(j ,k )*(viscosity(j ,k )-viscosity(j ,k-1)) & |
76: +yarea(j-1,k )*(viscosity(j-1,k )-viscosity(j-1,k-1))) |
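The multiply/FMA chain at 0x427912-0x42793d accumulates the four density0*volume products, the VMULPD at 0x427948 applies the 0.25_8 factor held in ZMM3, and the VDIVPD at 0x42794e divides the loop-invariant halfdt broadcast in ZMM0 by that sum, producing stepbymass_s for eight consecutive j values per vector iteration (the register roles are inferred from the data flow, not from debug info). In scalar form this is lines 63-67 above:

$$
\texttt{stepbymass\_s} \;=\; \frac{\texttt{halfdt}}{0.25\,\bigl(\rho_0V\big|_{j-1,k-1} + \rho_0V\big|_{j,k-1} + \rho_0V\big|_{j,k} + \rho_0V\big|_{j-1,k}\bigr)},
\qquad
\rho_0V\big|_{j,k} \equiv \texttt{density0}(j,k)\,\texttt{volume}(j,k)
$$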
| Coverage (%) | Name | Source Location | Module |
|---|---|---|---|
| 100.00+ | __kmp_invoke_microtask | libiomp5.so | |
| | __kmp_invoke_task_func | libiomp5.so | |
| Path / |
| Metric | Value |
|---|---|
| CQA speedup if no scalar integer | 1.00 |
| CQA speedup if FP arith vectorized | 1.00 |
| CQA speedup if fully vectorized | 1.00 |
| CQA speedup if no inter-iteration dependency | NA |
| CQA speedup if next bottleneck killed | 1.23 |
| Bottlenecks | P0 |
| Function | accelerate_kernel_.DIR.OMP.PARALLEL.2 |
| Source | accelerate_kernel.f90:62-76 |
| Source loop unroll info | not unrolled or unrolled with no peel/tail loop |
| Source loop unroll confidence level | max |
| Unroll/vectorization loop type | NA |
| Unroll factor | NA |
| CQA cycles | 16.00 |
| CQA cycles if no scalar integer | 16.00 |
| CQA cycles if FP arith vectorized | 16.00 |
| CQA cycles if fully vectorized | 16.00 |
| Front-end cycles | 9.17 |
| DIV/SQRT cycles | 13.00 |
| P0 cycles | 11.50 |
| P1 cycles | 8.67 |
| P2 cycles | 8.67 |
| P3 cycles | 1.00 |
| P4 cycles | 13.00 |
| P5 cycles | 1.00 |
| P6 cycles | 1.00 |
| P7 cycles | 1.00 |
| P8 cycles | 1.00 |
| P9 cycles | 0.00 |
| P10 cycles | 8.67 |
| P11 cycles | 16.00 |
| Inter-iter dependencies cycles | 1 |
| FE+BE cycles (UFS) | 16.47 - 17.10 |
| Stall cycles (UFS) | 6.65 - 7.28 |
| Nb insns | 48.00 |
| Nb uops | 49.00 |
| Nb loads | 26.00 |
| Nb stores | 2.00 |
| Nb stack references | 4.00 |
| FLOP/cycle | 17.50 |
| Nb FLOP add-sub | 64.00 |
| Nb FLOP mul | 32.00 |
| Nb FLOP fma | 88.00 |
| Nb FLOP div | 8.00 |
| Nb FLOP rcp | 0.00 |
| Nb FLOP sqrt | 0.00 |
| Nb FLOP rsqrt | 0.00 |
| Bytes/cycle | 98.00 |
| Bytes prefetched | 0.00 |
| Bytes loaded | 1440.00 |
| Bytes stored | 128.00 |
| Stride 0 | 1.00 |
| Stride 1 | 3.00 |
| Stride n | 9.00 |
| Stride unknown | 1.00 |
| Stride indirect | 0.00 |
| Vectorization ratio all | 100.00 |
| Vectorization ratio load | 100.00 |
| Vectorization ratio store | 100.00 |
| Vectorization ratio mul | 100.00 |
| Vectorization ratio add_sub | 100.00 |
| Vectorization ratio fma | 100.00 |
| Vectorization ratio div_sqrt | 100.00 |
| Vectorization ratio other | NA |
| Vector-efficiency ratio all | 100.00 |
| Vector-efficiency ratio load | 100.00 |
| Vector-efficiency ratio store | 100.00 |
| Vector-efficiency ratio mul | 100.00 |
| Vector-efficiency ratio add_sub | 100.00 |
| Vector-efficiency ratio fma | 100.00 |
| Vector-efficiency ratio div_sqrt | 100.00 |
| Vector-efficiency ratio other | NA |
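As a cross-check, the derived throughput figures above follow directly from the raw per-iteration counts in this table, assuming the tool counts each FMA lane as two FLOPs (which is what makes the numbers consistent):

$$
\frac{\text{FLOP}}{\text{cycle}} = \frac{64_{\text{add-sub}} + 32_{\text{mul}} + 2\times 88_{\text{fma}} + 8_{\text{div}}}{16\ \text{CQA cycles}} = \frac{280}{16} = 17.5,
\qquad
\frac{\text{bytes}}{\text{cycle}} = \frac{1440_{\text{loaded}} + 128_{\text{stored}}}{16} = 98
$$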
| Path / |
| Function | accelerate_kernel_.DIR.OMP.PARALLEL.2 |
| Source file and lines | accelerate_kernel.f90:62-76 |
| Module | exec |
| nb instructions | 48 |
| nb uops | 49 |
| loop length | 346 bytes |
| used x86 registers | 15 |
| used mmx registers | 0 |
| used xmm registers | 0 |
| used ymm registers | 0 |
| used zmm registers | 15 |
| nb stack references | 4 |
| ADD-SUB / MUL ratio | 2.00 |
| micro-operation queue | 9.17 cycles |
| front end | 9.17 cycles |
| | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| uops | 13.00 | 0.00 | 8.67 | 8.67 | 1.00 | 13.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 8.67 |
| cycles | 13.00 | 11.50 | 8.67 | 8.67 | 1.00 | 13.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 8.67 |
| Cycles executing div or sqrt instructions | 16.00 |
| Longest recurrence chain latency (RecMII) | 1.00 |
| FE+BE cycles | 16.47-17.10 |
| Stall cycles | 6.65-7.28 |
| LB full (events) | 7.30-7.93 |
| Front-end | 9.17 |
| Dispatch | 13.00 |
| DIV/SQRT | 16.00 |
| Data deps. | 1.00 |
| Overall L1 | 16.00 |
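The Overall L1 estimate matches the maximum of the individual limits above, so the divider on P0 is the binding resource; this is consistent with the reported bottleneck (P0) and with the "speedup if next bottleneck killed" of 1.23, since 16.00/13.00 ≈ 1.23 is the gain available once the single VDIVPD (reciprocal throughput 16) no longer gates the loop:

$$
\text{Overall L1} = \max(\underbrace{9.17}_{\text{front-end}},\ \underbrace{13.00}_{\text{dispatch}},\ \underbrace{16.00}_{\text{DIV/SQRT}},\ \underbrace{1.00}_{\text{data deps.}}) = 16.00\ \text{cycles per iteration}
$$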
Vectorization ratios
| all | 100% |
| load | 100% |
| store | 100% |
| mul | 100% |
| add-sub | 100% |
| fma | 100% |
| div/sqrt | 100% |
| other | NA (no other vectorizable/vectorized instructions) |
Vector efficiency ratios
| all | 100% |
| load | 100% |
| store | 100% |
| mul | 100% |
| add-sub | 100% |
| fma | 100% |
| div/sqrt | 100% |
| other | NA (no other vectorizable/vectorized instructions) |
| Instruction | Nb FU | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Latency | Recip. throughput |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VMOVUPD -0x8(%RDX,%R8,8),%ZMM1 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%RDX,%R8,8),%ZMM2 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMULPD -0x8(%R11,%R8,8),%ZMM1,%ZMM1 | 1 | 0.50 | 0 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 4 | 0.50 |
| VFMADD231PD (%R11,%R8,8),%ZMM2,%ZMM1 | 1 | 0.50 | 0 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 4 | 0.50 |
| VMOVUPD -0x8(%RCX,%R8,8),%ZMM2 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%RCX,%R8,8),%ZMM4 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VFMADD132PD (%RSI,%R8,8),%ZMM1,%ZMM4 | 1 | 0.50 | 0 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 4 | 0.50 |
| VFMADD231PD -0x8(%RSI,%R8,8),%ZMM2,%ZMM4 | 1 | 0.50 | 0 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 4 | 0.50 |
| VMULPD %ZMM3,%ZMM4,%ZMM1 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VDIVPD %ZMM1,%ZMM0,%ZMM1 | 3 | 2.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 22-24 | 16 |
| VMOVUPD (%RDI,%R8,8),%ZMM2 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD -0x8(%RBX,%R8,8),%ZMM4 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%RBX,%R8,8),%ZMM5 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VSUBPD %ZMM5,%ZMM4,%ZMM18 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMULPD %ZMM2,%ZMM18,%ZMM18 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VMOVUPD (%R15,%R8,8),%ZMM19 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD -0x8(%RAX,%R8,8),%ZMM20 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%RAX,%R8,8),%ZMM21 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VSUBPD %ZMM21,%ZMM20,%ZMM22 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VFMADD213PD %ZMM18,%ZMM19,%ZMM22 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VMOVUPD -0x8(%R9,%R8,8),%ZMM18 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%R9,%R8,8),%ZMM23 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VSUBPD %ZMM5,%ZMM21,%ZMM5 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMULPD %ZMM5,%ZMM23,%ZMM5 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VSUBPD %ZMM4,%ZMM20,%ZMM4 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VFMADD213PD %ZMM5,%ZMM18,%ZMM4 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VMOVUPD -0x8(%R13,%R8,8),%ZMM5 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%R13,%R8,8),%ZMM20 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VSUBPD %ZMM20,%ZMM5,%ZMM21 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMOVUPD -0x8(%R12,%R8,8),%ZMM26 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VMOVUPD (%R12,%R8,8),%ZMM27 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.50 |
| VSUBPD %ZMM27,%ZMM26,%ZMM28 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VFMADD213PD %ZMM22,%ZMM2,%ZMM21 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VFMADD231PD %ZMM28,%ZMM19,%ZMM21 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VFMADD213PD (%R14,%R8,8),%ZMM1,%ZMM21 | 1 | 0.50 | 0 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 4 | 0.50 |
| MOV 0x1f8(%RSP),%R10 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 1 | 0.33 |
| VMOVUPD %ZMM21,(%R10,%R8,8) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 1 |
| VSUBPD %ZMM20,%ZMM27,%ZMM2 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VSUBPD %ZMM5,%ZMM26,%ZMM5 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VFMADD213PD %ZMM4,%ZMM23,%ZMM2 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| VFMADD231PD %ZMM5,%ZMM18,%ZMM2 | 1 | 0.50 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0.50 |
| MOV 0x38(%RSP),%R10 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 1 | 0.33 |
| VFMADD213PD (%R10,%R8,8),%ZMM1,%ZMM2 | 1 | 0.50 | 0 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 4 | 0.50 |
| MOV 0x20(%RSP),%R10 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 1 | 0.33 |
| VMOVUPD %ZMM2,(%R10,%R8,8) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 1 |
| ADD $0x8,%R8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.17 |
| CMP 0x40(%RSP),%R8 | 1 | 0.20 | 0.20 | 0.33 | 0.33 | 0 | 0.20 | 0.20 | 0 | 0 | 0 | 0.20 | 0.33 | 1 | 0.33 |
| JB 427900 <accelerate_kernel_module_mp_accelerate_kernel_.DIR.OMP.PARALLEL.2+0x5c0> | 1 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 0.50 |
