3 # ====================================================================
4 # Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
5 # project. The module is, however, dual licensed under OpenSSL and
6 # CRYPTOGAMS licenses depending on where you obtain it. For further
7 # details see http://www.openssl.org/~appro/cryptogams/.
8 # ====================================================================
12 # The module implements the "4-bit" GCM GHASH function and underlying
13 # single multiplication operation in GF(2^128). "4-bit" means that it
14 # uses a 256-byte per-key table [+64/128 bytes fixed table]. It has two
15 # code paths: vanilla x86 and vanilla MMX. The former will be executed on
16 # 486 and Pentium, the latter on all others. Performance results are for
17 # the streamed GHASH subroutine and are expressed in cycles per processed
18 # byte, less is better:
20 # gcc 2.95.3(*) MMX assembler x86 assembler
22 # Pentium 100/112(**) - 50
24 # P4 96 /122 24.5 84(***)
25 # Opteron 50 /71 14.5 30
26 # Core2 54 /68 10.5 18
28 # (*)	gcc 3.4.x was observed to generate a few percent slower code,
29 #	which is one of the reasons why 2.95.3 results were chosen;
30 #	another reason is the lack of 3.4.x results for older CPUs;
31 # (**) second number is result for code compiled with -fPIC flag,
32 # which is actually more relevant, because assembler code is
33 # position-independent;
34 # (***) see comment in non-MMX routine for further details;
36 # To summarize, it's >2-4 times faster than gcc-generated code. To
37 # anchor it to something else: SHA1 assembler processes one byte in
38 # 11-13 cycles on contemporary x86 cores. As for the choice of MMX in
39 # particular, see the comment at the end of the file...
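#
# For reference, a compact C sketch of the same "4-bit" (Shoup) scheme,
# written along the lines of the generic code in gcm128.c [illustrative
# sketch only, not the exact reference implementation]. Htable[i] holds
# i*H, i.e. 16 entries of 16 bytes - the 256 bytes of per-key table
# mentioned above - and rem_4bit is the 64/128-byte fixed table:
#
#	typedef unsigned char      u8;
#	typedef unsigned short     u16;
#	typedef unsigned long long u64;
#	typedef struct { u64 hi, lo; } u128;
#
#	static const u16 rem_4bit[16] = {
#	    0x0000, 0x1C20, 0x3840, 0x2460, 0x7080, 0x6CA0, 0x48C0, 0x54E0,
#	    0xE100, 0xFD20, 0xD940, 0xC560, 0x9180, 0x8DA0, 0xA9C0, 0xB5E0 };
#
#	static void gcm_gmult_4bit(u64 Xi[2], const u128 Htable[16])
#	{
#	    const u8 *xi = (const u8 *)Xi;	/* Xi is kept big-endian */
#	    int   cnt = 15;
#	    u8    nlo = xi[15], nhi = nlo >> 4, rem;
#	    u128  Z;
#
#	    nlo &= 0xf;
#	    Z.hi = Htable[nlo].hi;
#	    Z.lo = Htable[nlo].lo;
#
#	    while (1) {
#	        /* Z >>= 4, fold the dropped nibble back in through
#	         * rem_4bit, then add the Htable entry for next nibble */
#	        rem  = (u8)Z.lo & 0xf;
#	        Z.lo = (Z.hi << 60) | (Z.lo >> 4);
#	        Z.hi = (Z.hi >> 4) ^ ((u64)rem_4bit[rem] << 48);
#	        Z.hi ^= Htable[nhi].hi;
#	        Z.lo ^= Htable[nhi].lo;
#
#	        if (--cnt < 0) break;
#
#	        nlo = xi[cnt]; nhi = nlo >> 4; nlo &= 0xf;
#
#	        rem  = (u8)Z.lo & 0xf;
#	        Z.lo = (Z.hi << 60) | (Z.lo >> 4);
#	        Z.hi = (Z.hi >> 4) ^ ((u64)rem_4bit[rem] << 48);
#	        Z.hi ^= Htable[nlo].hi;
#	        Z.lo ^= Htable[nlo].lo;
#	    }
#
#	    for (cnt = 0; cnt < 8; cnt++) {	/* store back big-endian */
#	        ((u8 *)Xi)[cnt]     = (u8)(Z.hi >> (56 - 8*cnt));
#	        ((u8 *)Xi)[8 + cnt] = (u8)(Z.lo >> (56 - 8*cnt));
#	    }
#	}
#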
43 # Add PCLMULQDQ version performing at 2.13 cycles per processed byte.
44 # The question is: how close is it to the theoretical limit? The pclmulqdq
45 # instruction latency appears to be 14 cycles and there can't be more
46 # than 2 of them executing at any given time. This means that a single
47 # Karatsuba multiplication [3 pclmulqdq] would take 28 cycles *plus* a
48 # few cycles for pre- and post-processing. Then the multiplication has
49 # to be followed by modulo-reduction. Given that the aggregated reduction
50 # method [see "Carry-less Multiplication and Its Usage for Computing the
51 # GCM Mode" white paper by Intel] allows the reduction to be performed
52 # only once in a while, we can assume that asymptotic performance can be
53 # estimated as (28+Tmod/Naggr)/16, where Tmod is the time to perform the
54 # reduction and Naggr is the aggregation factor.
56 # Before we proceed to this implementation, let's have a closer look at
57 # the best-performing code suggested by Intel in their white paper.
58 # By tracing inter-register dependencies, Tmod is estimated as ~19
59 # cycles and Naggr is 4, resulting in 2.05 cycles per processed byte.
60 # As implied, this is quite an optimistic estimate, because it does not
61 # account for Karatsuba pre- and post-processing, which for a single
62 # multiplication is ~5 cycles. Unfortunately Intel does not provide
63 # performance data for GHASH alone, only for the fused GCM mode. But
64 # we can estimate it by subtracting the CTR performance result provided
65 # in the "AES Instruction Set" white paper: 3.54-1.38=2.16 cycles per
66 # processed byte, or 5% off the estimate. It should be noted though
67 # that 3.54 is the GCM result for a 16KB block size, while 1.38 is CTR
68 # for a 1KB block size, meaning that the real number is likely to be a
69 # bit further from the estimate.
71 # Moving on to the implementation in question. Tmod is estimated as
72 # ~13 cycles and Naggr is 2, giving asymptotic performance of ...
73 # 2.16. How is it possible that the measured performance is better than
74 # the optimistic theoretical estimate? There is one thing Intel failed
75 # to recognize. By fusing GHASH with CTR, the former's performance is
76 # really limited by the above (Tmul + Tmod/Naggr) equation. But if the
77 # GHASH procedure is detached, the modulo-reduction can be interleaved
78 # with the Naggr-1 multiplications and under ideal conditions even
79 # disappear from the equation. So the optimistic theoretical estimate for
80 # this implementation is ... 28/16=1.75, and not 2.16. Well, that's
81 # probably way too optimistic, at least for such a small Naggr. I'd argue
82 # that (28+Tproc/Naggr), where Tproc is the time required for Karatsuba
83 # pre- and post-processing, is a more realistic estimate. In this case it
84 # gives ... 1.91 cycles per processed byte. Or in other words, depending
85 # on how well we can interleave the reduction with one of the two
86 # multiplications, the performance should be between 1.91 and 2.16.
87 # As already mentioned, this implementation processes one byte [out
88 # of a 1KB buffer] in 2.13 cycles, while the x86_64 counterpart does it
89 # in 2.07. x86_64 performance is better, because the larger register
90 # bank allows reduction and multiplication to be interleaved better.
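#
# To put explicit numbers on it: with Tproc=~5 [the per-multiplication
# pre-/post-processing estimate from above] and Naggr=2, (28+5/2)/16=~1.91,
# while the "fused" (28+Tmod/Naggr)/16 bound gives (28+13/2)/16=~2.16;
# the measured 2.13 indeed falls between the two.
#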
92 # Does it make sense to increase Naggr? To start with, it's virtually
93 # impossible in 32-bit mode, because of the limited register bank
94 # capacity. Otherwise the improvement has to be weighed against slower
95 # setup, as well as the increase in code size and complexity. As even
96 # the optimistic estimate doesn't promise a 30% performance improvement,
97 # there are currently no plans to increase Naggr.
99 # Special thanks to David Woodhouse <dwmw2@infradead.org> for
100 # providing access to a Westmere-based system on behalf of Intel
101 # Open Source Technology Centre.
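#
# For reference, the routines generated below are expected to match,
# roughly, the C-level interface used by crypto/modes/gcm128.c, with
# u128 denoting a pair of 64-bit words and Xi/inp holding 128-bit
# values in big-endian byte order:
#
#	void gcm_init_clmul(u128 Htable[16], const u64 Xi[2]);
#	void gcm_gmult_4bit_x86(u64 Xi[2], const u128 Htable[16]);
#	void gcm_ghash_4bit_x86(u64 Xi[2], const u128 Htable[16],
#				const u8 *inp, size_t len);
#
# with _mmx and _clmul counterparts taking the same arguments, and the
# suffix-less 4bit names emitted instead when only the vanilla x86
# path is compiled (the "386" case below).
#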
103 $0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
104 push(@INC,"${dir}","${dir}../../perlasm");
107 &asm_init($ARGV[0],"ghash-x86.pl",$x86only = $ARGV[$#ARGV] eq "386");
110 for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
112 ($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
116 $unroll = 0; # Affects x86 loop. Folded loop performs ~7% worse
117 # than unrolled, which has to be weighed against
118 # 2.5x x86-specific code size reduction.
124 &mov ($Zhh,&DWP(4,$Htbl,$Zll));
125 &mov ($Zhl,&DWP(0,$Htbl,$Zll));
126 &mov ($Zlh,&DWP(12,$Htbl,$Zll));
127 &mov ($Zll,&DWP(8,$Htbl,$Zll));
128 &xor ($rem,$rem); # avoid partial register stalls on PIII
130 # shrd practically kills P4, 2.5x deterioration, but P4 has
131 # MMX code-path to execute. shrd runs a tad faster [than twice
132 # the shifts, movs and ors] on pre-MMX Pentium (as well as
133 # PIII and Core2), *but* minimizes code size, spares a register
134 # and thus allows the loop to be folded...
138 &jmp (&label("x86_loop"));
139 &set_label("x86_loop",16);
140 for($i=1;$i<=2;$i++) {
141 &mov (&LB($rem),&LB($Zll));
143 &and (&LB($rem),0xf);
147 &xor ($Zhh,&DWP($off+16,"esp",$rem,4));
149 &mov (&LB($rem),&BP($off,"esp",$cnt));
151 &and (&LB($rem),0xf0);
156 &xor ($Zll,&DWP(8,$Htbl,$rem));
157 &xor ($Zlh,&DWP(12,$Htbl,$rem));
158 &xor ($Zhl,&DWP(0,$Htbl,$rem));
159 &xor ($Zhh,&DWP(4,$Htbl,$rem));
163 &js (&label("x86_break"));
165 &jmp (&label("x86_loop"));
168 &set_label("x86_break",16);
170 for($i=1;$i<32;$i++) {
172 &mov (&LB($rem),&LB($Zll));
174 &and (&LB($rem),0xf);
178 &xor ($Zhh,&DWP($off+16,"esp",$rem,4));
181 &mov (&LB($rem),&BP($off+15-($i>>1),"esp"));
182 &and (&LB($rem),0xf0);
184 &mov (&LB($rem),&BP($off+15-($i>>1),"esp"));
188 &xor ($Zll,&DWP(8,$Htbl,$rem));
189 &xor ($Zlh,&DWP(12,$Htbl,$rem));
190 &xor ($Zhl,&DWP(0,$Htbl,$rem));
191 &xor ($Zhh,&DWP(4,$Htbl,$rem));
207 &function_begin_B("_x86_gmult_4bit_inner");
210 &function_end_B("_x86_gmult_4bit_inner");
213 sub deposit_rem_4bit {
216 &mov (&DWP($bias+0, "esp"),0x0000<<16);
217 &mov (&DWP($bias+4, "esp"),0x1C20<<16);
218 &mov (&DWP($bias+8, "esp"),0x3840<<16);
219 &mov (&DWP($bias+12,"esp"),0x2460<<16);
220 &mov (&DWP($bias+16,"esp"),0x7080<<16);
221 &mov (&DWP($bias+20,"esp"),0x6CA0<<16);
222 &mov (&DWP($bias+24,"esp"),0x48C0<<16);
223 &mov (&DWP($bias+28,"esp"),0x54E0<<16);
224 &mov (&DWP($bias+32,"esp"),0xE100<<16);
225 &mov (&DWP($bias+36,"esp"),0xFD20<<16);
226 &mov (&DWP($bias+40,"esp"),0xD940<<16);
227 &mov (&DWP($bias+44,"esp"),0xC560<<16);
228 &mov (&DWP($bias+48,"esp"),0x9180<<16);
229 &mov (&DWP($bias+52,"esp"),0x8DA0<<16);
230 &mov (&DWP($bias+56,"esp"),0xA9C0<<16);
231 &mov (&DWP($bias+60,"esp"),0xB5E0<<16);
234 $suffix = $x86only ? "" : "_x86";
236 &function_begin("gcm_gmult_4bit".$suffix);
237 &stack_push(16+4+1); # +1 for stack alignment
238 &mov ($inp,&wparam(0)); # load Xi
239 &mov ($Htbl,&wparam(1)); # load Htable
241 &mov ($Zhh,&DWP(0,$inp)); # load Xi[16]
242 &mov ($Zhl,&DWP(4,$inp));
243 &mov ($Zlh,&DWP(8,$inp));
244 &mov ($Zll,&DWP(12,$inp));
246 &deposit_rem_4bit(16);
248 &mov (&DWP(0,"esp"),$Zhh); # copy Xi[16] on stack
249 &mov (&DWP(4,"esp"),$Zhl);
250 &mov (&DWP(8,"esp"),$Zlh);
251 &mov (&DWP(12,"esp"),$Zll);
256 &call ("_x86_gmult_4bit_inner");
259 &mov ($inp,&wparam(0));
262 &mov (&DWP(12,$inp),$Zll);
263 &mov (&DWP(8,$inp),$Zlh);
264 &mov (&DWP(4,$inp),$Zhl);
265 &mov (&DWP(0,$inp),$Zhh);
267 &function_end("gcm_gmult_4bit".$suffix);
269 &function_begin("gcm_ghash_4bit".$suffix);
270 &stack_push(16+4+1); # +1 for 64-bit alignment
271 &mov ($Zll,&wparam(0)); # load Xi
272 &mov ($Htbl,&wparam(1)); # load Htable
273 &mov ($inp,&wparam(2)); # load in
274 &mov ("ecx",&wparam(3)); # load len
276 &mov (&wparam(3),"ecx");
278 &mov ($Zhh,&DWP(0,$Zll)); # load Xi[16]
279 &mov ($Zhl,&DWP(4,$Zll));
280 &mov ($Zlh,&DWP(8,$Zll));
281 &mov ($Zll,&DWP(12,$Zll));
283 &deposit_rem_4bit(16);
285 &set_label("x86_outer_loop",16);
286 &xor ($Zll,&DWP(12,$inp)); # xor with input
287 &xor ($Zlh,&DWP(8,$inp));
288 &xor ($Zhl,&DWP(4,$inp));
289 &xor ($Zhh,&DWP(0,$inp));
290 &mov (&DWP(12,"esp"),$Zll); # dump it on stack
291 &mov (&DWP(8,"esp"),$Zlh);
292 &mov (&DWP(4,"esp"),$Zhl);
293 &mov (&DWP(0,"esp"),$Zhh);
299 &call ("_x86_gmult_4bit_inner");
302 &mov ($inp,&wparam(2));
304 &lea ($inp,&DWP(16,$inp));
305 &cmp ($inp,&wparam(3));
306 &mov (&wparam(2),$inp) if (!$unroll);
307 &jb (&label("x86_outer_loop"));
309 &mov ($inp,&wparam(0)); # load Xi
310 &mov (&DWP(12,$inp),$Zll);
311 &mov (&DWP(8,$inp),$Zlh);
312 &mov (&DWP(4,$inp),$Zhl);
313 &mov (&DWP(0,$inp),$Zhh);
315 &function_end("gcm_ghash_4bit".$suffix);
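#
# In C terms the streamed routine above amounts to the following sketch
# [in the spirit of gcm128.c, building on the gcm_gmult_4bit sketch
# earlier; len is assumed to be a multiple of the 16-byte block size]:
#
#	static void gcm_ghash_4bit(u64 Xi[2], const u128 Htable[16],
#	                           const u8 *inp, size_t len)
#	{
#	    u8 *xi = (u8 *)Xi;
#	    int i;
#
#	    while (len >= 16) {
#	        for (i = 0; i < 16; i++)
#	            xi[i] ^= inp[i];		/* Xi ^= Ii           */
#	        gcm_gmult_4bit(Xi, Htable);	/* Xi  = Xi*H mod P   */
#	        inp += 16;
#	        len -= 16;
#	    }
#	}
#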
319 &static_label("rem_4bit");
321 &function_begin_B("_mmx_gmult_4bit_inner");
322 # The MMX version performs 3.5 times better on P4 (see the comment in the
323 # non-MMX routine for further details), 100% better on Opteron, ~70% better
324 # on Core2 and PIII... In other words, the effort is considered to be well
325 # spent... Since the initial release the loop was unrolled in order to
326 # "liberate" the register previously used as the loop counter. Instead it's
327 # used to optimize the critical path in 'Z.hi ^= rem_4bit[Z.lo&0xf]'.
328 # The path involves a move of Z.lo from MMX to an integer register, an
329 # effective address calculation and finally a merge of the value into Z.hi.
330 # The reference to rem_4bit is scheduled so late that I had to >>4 the
331 # rem_4bit elements [compensated by the final 'shl' in the routine].
332 # This resulted in a 20-45% improvement on contemporary µ-archs.
335 my $rem_4bit = "eax";
336 my @rem = ($Zhh,$Zll);
340 my ($Zlo,$Zhi) = ("mm0","mm1");
343 &xor ($nlo,$nlo); # avoid partial register stalls on PIII
345 &mov (&LB($nlo),&LB($nhi));
348 &movq ($Zlo,&QWP(8,$Htbl,$nlo));
349 &movq ($Zhi,&QWP(0,$Htbl,$nlo));
350 &movd ($rem[0],$Zlo);
352 for ($cnt=28;$cnt>=-2;$cnt--) {
354 my $nix = $odd ? $nlo : $nhi;
356 &shl (&LB($nlo),4) if ($odd);
360 &pxor ($Zlo,&QWP(8,$Htbl,$nix));
361 &mov (&LB($nlo),&BP($cnt/2,$inp)) if (!$odd && $cnt>=0);
363 &and ($nhi,0xf0) if ($odd);
364 &pxor ($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
366 &pxor ($Zhi,&QWP(0,$Htbl,$nix));
367 &mov ($nhi,$nlo) if (!$odd && $cnt>=0);
368 &movd ($rem[1],$Zlo);
371 push (@rem,shift(@rem)); # "rotate" registers
374 &mov ($inp,&DWP(4,$rem_4bit,$rem[1],8)); # last rem_4bit[rem]
376 &psrlq ($Zlo,32); # lower part of Zlo is already there
381 &shl ($inp,4); # compensate for rem_4bit[i] being >>4
391 &function_end_B("_mmx_gmult_4bit_inner");
393 &function_begin("gcm_gmult_4bit_mmx");
394 &mov ($inp,&wparam(0)); # load Xi
395 &mov ($Htbl,&wparam(1)); # load Htable
397 &call (&label("pic_point"));
398 &set_label("pic_point");
400 &lea ("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
402 &movz ($Zll,&BP(15,$inp));
404 &call ("_mmx_gmult_4bit_inner");
406 &mov ($inp,&wparam(0)); # load Xi
408 &mov (&DWP(12,$inp),$Zll);
409 &mov (&DWP(4,$inp),$Zhl);
410 &mov (&DWP(8,$inp),$Zlh);
411 &mov (&DWP(0,$inp),$Zhh);
412 &function_end("gcm_gmult_4bit_mmx");
414 # Streamed version performs 20% better on P4, 7% on Opteron,
415 # 10% on Core2 and PIII...
416 &function_begin("gcm_ghash_4bit_mmx");
417 &mov ($Zhh,&wparam(0)); # load Xi
418 &mov ($Htbl,&wparam(1)); # load Htable
419 &mov ($inp,&wparam(2)); # load in
420 &mov ($Zlh,&wparam(3)); # load len
422 &call (&label("pic_point"));
423 &set_label("pic_point");
425 &lea ("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
428 &mov (&wparam(3),$Zlh); # len to point at the end of input
429 &stack_push(4+1); # +1 for stack alignment
431 &mov ($Zll,&DWP(12,$Zhh)); # load Xi[16]
432 &mov ($Zhl,&DWP(4,$Zhh));
433 &mov ($Zlh,&DWP(8,$Zhh));
434 &mov ($Zhh,&DWP(0,$Zhh));
435 &jmp (&label("mmx_outer_loop"));
437 &set_label("mmx_outer_loop",16);
438 &xor ($Zll,&DWP(12,$inp));
439 &xor ($Zhl,&DWP(4,$inp));
440 &xor ($Zlh,&DWP(8,$inp));
441 &xor ($Zhh,&DWP(0,$inp));
442 &mov (&wparam(2),$inp);
443 &mov (&DWP(12,"esp"),$Zll);
444 &mov (&DWP(4,"esp"),$Zhl);
445 &mov (&DWP(8,"esp"),$Zlh);
446 &mov (&DWP(0,"esp"),$Zhh);
451 &call ("_mmx_gmult_4bit_inner");
453 &mov ($inp,&wparam(2));
454 &lea ($inp,&DWP(16,$inp));
455 &cmp ($inp,&wparam(3));
456 &jb (&label("mmx_outer_loop"));
458 &mov ($inp,&wparam(0)); # load Xi
460 &mov (&DWP(12,$inp),$Zll);
461 &mov (&DWP(4,$inp),$Zhl);
462 &mov (&DWP(8,$inp),$Zlh);
463 &mov (&DWP(0,$inp),$Zhh);
466 &function_end("gcm_ghash_4bit_mmx");
469 ######################################################################
478 ($Xi,$Xhi)=("xmm0","xmm1"); $Hkey="xmm2";
479 ($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
480 ($Xn,$Xhn)=("xmm6","xmm7");
482 &static_label("bswap");
484 sub clmul64x64_T2 { # minimal "register" pressure
485 my ($Xhi,$Xi,$Hkey)=@_;
487 &movdqa ($Xhi,$Xi); #
488 &pshufd ($T1,$Xi,0b01001110);
489 &pshufd ($T2,$Hkey,0b01001110);
493 &pclmulqdq ($Xi,$Hkey,0x00); #######
494 &pclmulqdq ($Xhi,$Hkey,0x11); #######
495 &pclmulqdq ($T1,$T2,0x00); #######
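#
# In intrinsics terms the three carry-less multiplications above build
# the 256-bit product via the usual Karatsuba split; a rough C sketch
# [using a hypothetical helper name and the standard <wmmintrin.h>
# intrinsic; compile with -msse2 -mpclmul]:
#
#	#include <emmintrin.h>
#	#include <wmmintrin.h>			/* _mm_clmulepi64_si128 */
#
#	/* x*h -> hi:lo, 256-bit carry-less product, prior to reduction */
#	static void clmul128(__m128i x, __m128i h, __m128i *lo, __m128i *hi)
#	{
#	    __m128i a  = _mm_clmulepi64_si128(x, h, 0x00);	/* x.lo*h.lo */
#	    __m128i b  = _mm_clmulepi64_si128(x, h, 0x11);	/* x.hi*h.hi */
#	    __m128i xs = _mm_xor_si128(x, _mm_shuffle_epi32(x, 0x4e));
#	    __m128i hs = _mm_xor_si128(h, _mm_shuffle_epi32(h, 0x4e));
#	    __m128i m  = _mm_clmulepi64_si128(xs, hs, 0x00);
#
#	    m   = _mm_xor_si128(m, _mm_xor_si128(a, b));	/* middle term */
#	    *lo = _mm_xor_si128(a, _mm_slli_si128(m, 8));	/* bits 0..127 */
#	    *hi = _mm_xor_si128(b, _mm_srli_si128(m, 8));	/* bits 128..255 */
#	}
#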
507 # Even though this subroutine offers visually better ILP, it
508 # was empirically found to be a tad slower than the above version,
509 # at least in the gcm_ghash_clmul context. But it's just as well,
510 # because loop modulo-scheduling is possible only thanks to the
511 # minimized "register" pressure...
512 my ($Xhi,$Xi,$Hkey)=@_;
516 &pclmulqdq ($Xi,$Hkey,0x00); #######
517 &pclmulqdq ($Xhi,$Hkey,0x11); #######
518 &pshufd ($T2,$T1,0b01001110); #
519 &pshufd ($T3,$Hkey,0b01001110);
522 &pclmulqdq ($T2,$T3,0x00); #######
533 if (1) { # Algorithm 9 with <<1 twist.
534 # Reduction is shorter and uses only two
535 # temporary registers, which makes it a better
536 # candidate for interleaving with the 64x64
537 # multiplication. The pre-modulo-scheduled loop
538 # was found to be ~20% faster than Algorithm 5
539 # below. Algorithm 9 was therefore chosen for
540 # further optimization...
542 sub reduction_alg9 { # 17/13 times faster than Intel version
569 &function_begin_B("gcm_init_clmul");
570 &mov ($Htbl,&wparam(0));
571 &mov ($Xip,&wparam(1));
573 &call (&label("pic"));
576 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
578 &movdqu ($Hkey,&QWP(0,$Xip));
579 &pshufd ($Hkey,$Hkey,0b01001110);# dword swap
582 &pshufd ($T2,$Hkey,0b11111111); # broadcast uppermost dword
587 &pcmpgtd ($T3,$T2); # broadcast carry bit
589 &por ($Hkey,$T1); # H<<=1
592 &pand ($T3,&QWP(16,$const)); # 0x1c2_polynomial
593 &pxor ($Hkey,$T3); # if(carry) H^=0x1c2_polynomial
597 &clmul64x64_T2 ($Xhi,$Xi,$Hkey);
598 &reduction_alg9 ($Xhi,$Xi);
600 &movdqu (&QWP(0,$Htbl),$Hkey); # save H
601 &movdqu (&QWP(16,$Htbl),$Xi); # save H^2
604 &function_end_B("gcm_init_clmul");
606 &function_begin_B("gcm_gmult_clmul");
607 &mov ($Xip,&wparam(0));
608 &mov ($Htbl,&wparam(1));
610 &call (&label("pic"));
613 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
615 &movdqu ($Xi,&QWP(0,$Xip));
616 &movdqa ($T3,&QWP(0,$const));
617 &movdqu ($Hkey,&QWP(0,$Htbl));
620 &clmul64x64_T2 ($Xhi,$Xi,$Hkey);
621 &reduction_alg9 ($Xhi,$Xi);
624 &movdqu (&QWP(0,$Xip),$Xi);
627 &function_end_B("gcm_gmult_clmul");
629 &function_begin("gcm_ghash_clmul");
630 &mov ($Xip,&wparam(0));
631 &mov ($Htbl,&wparam(1));
632 &mov ($inp,&wparam(2));
633 &mov ($len,&wparam(3));
635 &call (&label("pic"));
638 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
640 &movdqu ($Xi,&QWP(0,$Xip));
641 &movdqa ($T3,&QWP(0,$const));
642 &movdqu ($Hkey,&QWP(0,$Htbl));
646 &jz (&label("odd_tail"));
649 # Xi+2 = [H*(Ii+1 + Xi+1)] mod P =
650 # [(H*Ii+1) + (H*Xi+1)] mod P =
651 # [(H*Ii+1) + H^2*(Ii+Xi)] mod P, since Xi+1 = [H*(Ii+Xi)] mod P
653 &movdqu ($T1,&QWP(0,$inp)); # Ii
654 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
657 &pxor ($Xi,$T1); # Ii+Xi
659 &clmul64x64_T2 ($Xhn,$Xn,$Hkey); # H*Ii+1
660 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
662 &lea ($inp,&DWP(32,$inp)); # i+=2
664 &jbe (&label("even_tail"));
666 &set_label("mod_loop");
667 &clmul64x64_T2 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
668 &movdqu ($T1,&QWP(0,$inp)); # Ii
669 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
671 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
674 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
678 &movdqa ($T3,$Xn); #&clmul64x64_TX ($Xhn,$Xn,$Hkey); H*Ii+1
680 &pxor ($Xhi,$T1); # "Ii+Xi", consume early
682 &movdqa ($T1,$Xi); #&reduction_alg9($Xhi,$Xi); 1st phase
687 &pclmulqdq ($Xn,$Hkey,0x00); #######
693 &pshufd ($T1,$T3,0b01001110);
696 &pshufd ($T3,$Hkey,0b01001110);
699 &pclmulqdq ($Xhn,$Hkey,0x11); #######
700 &movdqa ($T2,$Xi); # 2nd phase
709 &pclmulqdq ($T1,$T3,0x00); #######
710 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
719 &movdqa ($T3,&QWP(0,$const));
721 &lea ($inp,&DWP(32,$inp));
723 &ja (&label("mod_loop"));
725 &set_label("even_tail");
726 &clmul64x64_T2 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
728 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
731 &reduction_alg9 ($Xhi,$Xi);
734 &jnz (&label("done"));
736 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
737 &set_label("odd_tail");
738 &movdqu ($T1,&QWP(0,$inp)); # Ii
740 &pxor ($Xi,$T1); # Ii+Xi
742 &clmul64x64_T2 ($Xhi,$Xi,$Hkey); # H*(Ii+Xi)
743 &reduction_alg9 ($Xhi,$Xi);
747 &movdqu (&QWP(0,$Xip),$Xi);
748 &function_end("gcm_ghash_clmul");
750 } else { # Algorithm 5. Kept for reference purposes.
752 sub reduction_alg5 { # 19/16 times faster than Intel version
797 &function_begin_B("gcm_init_clmul");
798 &mov ($Htbl,&wparam(0));
799 &mov ($Xip,&wparam(1));
801 &call (&label("pic"));
804 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
806 &movdqu ($Hkey,&QWP(0,$Xip));
807 &pshufd ($Hkey,$Hkey,0b01001110);# dword swap
811 &clmul64x64_T3 ($Xhi,$Xi,$Hkey);
812 &reduction_alg5 ($Xhi,$Xi);
814 &movdqu (&QWP(0,$Htbl),$Hkey); # save H
815 &movdqu (&QWP(16,$Htbl),$Xi); # save H^2
818 &function_end_B("gcm_init_clmul");
820 &function_begin_B("gcm_gmult_clmul");
821 &mov ($Xip,&wparam(0));
822 &mov ($Htbl,&wparam(1));
824 &call (&label("pic"));
827 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
829 &movdqu ($Xi,&QWP(0,$Xip));
830 &movdqa ($Xn,&QWP(0,$const));
831 &movdqu ($Hkey,&QWP(0,$Htbl));
834 &clmul64x64_T3 ($Xhi,$Xi,$Hkey);
835 &reduction_alg5 ($Xhi,$Xi);
838 &movdqu (&QWP(0,$Xip),$Xi);
841 &function_end_B("gcm_gmult_clmul");
843 &function_begin("gcm_ghash_clmul");
844 &mov ($Xip,&wparam(0));
845 &mov ($Htbl,&wparam(1));
846 &mov ($inp,&wparam(2));
847 &mov ($len,&wparam(3));
849 &call (&label("pic"));
852 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
854 &movdqu ($Xi,&QWP(0,$Xip));
855 &movdqa ($T3,&QWP(0,$const));
856 &movdqu ($Hkey,&QWP(0,$Htbl));
860 &jz (&label("odd_tail"));
863 # Xi+2 = [H*(Ii+1 + Xi+1)] mod P =
864 # [(H*Ii+1) + (H*Xi+1)] mod P =
865 # [(H*Ii+1) + H^2*(Ii+Xi)] mod P
867 &movdqu ($T1,&QWP(0,$inp)); # Ii
868 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
871 &pxor ($Xi,$T1); # Ii+Xi
873 &clmul64x64_T3 ($Xhn,$Xn,$Hkey); # H*Ii+1
874 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
877 &lea ($inp,&DWP(32,$inp)); # i+=2
878 &jbe (&label("even_tail"));
880 &set_label("mod_loop");
881 &clmul64x64_T3 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
882 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
884 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
887 &reduction_alg5 ($Xhi,$Xi);
890 &movdqa ($T3,&QWP(0,$const));
891 &movdqu ($T1,&QWP(0,$inp)); # Ii
892 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
895 &pxor ($Xi,$T1); # Ii+Xi
897 &clmul64x64_T3 ($Xhn,$Xn,$Hkey); # H*Ii+1
898 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
901 &lea ($inp,&DWP(32,$inp));
902 &ja (&label("mod_loop"));
904 &set_label("even_tail");
905 &clmul64x64_T3 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
907 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
910 &reduction_alg5 ($Xhi,$Xi);
912 &movdqa ($T3,&QWP(0,$const));
914 &jnz (&label("done"));
916 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
917 &set_label("odd_tail");
918 &movdqu ($T1,&QWP(0,$inp)); # Ii
920 &pxor ($Xi,$T1); # Ii+Xi
922 &clmul64x64_T3 ($Xhi,$Xi,$Hkey); # H*(Ii+Xi)
923 &reduction_alg5 ($Xhi,$Xi);
925 &movdqa ($T3,&QWP(0,$const));
928 &movdqu (&QWP(0,$Xip),$Xi);
929 &function_end("gcm_ghash_clmul");
933 &set_label("bswap",64);
934 &data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
935 &data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2); # 0x1c2_polynomial
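# (the GHASH reduction polynomial x^128+x^7+x^2+x+1, in the bit-reflected
#  form this "<<1 twist" code works with)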
938 &set_label("rem_4bit",64);
939 &data_word(0,0x0000<<12,0,0x1C20<<12,0,0x3840<<12,0,0x2460<<12);
940 &data_word(0,0x7080<<12,0,0x6CA0<<12,0,0x48C0<<12,0,0x54E0<<12);
941 &data_word(0,0xE100<<12,0,0xFD20<<12,0,0xD940<<12,0,0xC560<<12);
942 &data_word(0,0x9180<<12,0,0x8DA0<<12,0,0xA9C0<<12,0,0xB5E0<<12);
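#
# These rem_4bit values are not magic: entry i is the 16-bit reduction
# term picked up when nibble i is shifted out at the bottom, i.e. the XOR
# of 0x1C20<<b for every bit b set in i. A sketch of a generator in C
# [deposit_rem_4bit above stores the same values <<16; this static table
# keeps them pre-shifted >>4, hence the <<12, for the MMX path]:
#
#	unsigned short rem_4bit_entry(int i)
#	{
#	    unsigned short r = 0;
#	    int b;
#
#	    for (b = 0; b < 4; b++)
#	        if (i & (1 << b))
#	            r ^= (unsigned short)(0x1C20 << b);
#	    return r;
#	}
#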
945 &asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
948 # A question was raised about the choice of vanilla MMX. Or rather, why
949 # wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
950 # legacy CPUs such as PIII, the "4-bit" MMX version was observed to provide
951 # better performance than the *corresponding* SSE2 one even on contemporary
952 # CPUs. SSE2 results were provided by Peter-Michael Hager. He maintains an
953 # SSE2 implementation featuring a full range of lookup-table sizes, but with
954 # per-invocation lookup table setup. The latter means that the table size is
955 # chosen depending on how much data is to be hashed in any given call:
956 # more data - larger table. The best reported result for Core2 is ~4 cycles
957 # per processed byte out of a 64KB block. Recall that this number accounts
958 # even for the 64KB table setup overhead. As discussed in gcm128.c, we choose
959 # to be more conservative with respect to lookup table sizes, but how
960 # do the results compare? As per the table at the beginning, the minimalistic
961 # MMX version delivers ~11 cycles on the same platform. As also discussed in
962 # gcm128.c, the next-in-line "8-bit Shoup's" method should deliver twice the
963 # performance of the "4-bit" one. It should also be noted that in the SSE2
964 # case the improvement can be "super-linear," i.e. more than twice, mostly
965 # because >>8 maps to a single instruction on an SSE2 register. This is
966 # unlike the "4-bit" case, where >>4 maps to the same number of instructions
967 # in both the MMX and SSE2 cases. The bottom line is that a switch to SSE2 is
968 # considered justifiable only in case we choose to implement