3 # ====================================================================
4 # Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
5 # project. The module is, however, dual licensed under OpenSSL and
6 # CRYPTOGAMS licenses depending on where you obtain it. For further
7 # details see http://www.openssl.org/~appro/cryptogams/.
8 # ====================================================================
10 # March, May, June 2010
12 # The module implements the "4-bit" GCM GHASH function and the underlying
13 # single multiplication operation in GF(2^128). "4-bit" means that it
14 # uses a 256-byte per-key table [+64/128 bytes of fixed table]. It has two
15 # code paths: vanilla x86 and vanilla MMX. The former is executed on
16 # 486 and Pentium, the latter on all others. MMX GHASH features a so-called
17 # "528B" variant of the "4-bit" method utilizing an additional 256+16 bytes
18 # of per-key storage [+512 bytes shared table]. Performance results
19 # are for streamed GHASH subroutine and are expressed in cycles per
20 # processed byte, less is better:
22 #              gcc 2.95.3(*)   MMX assembler   x86 assembler
24 # Pentium      100/112(**)     -               50
26 # P4           96 /122         18.0            84(***)
27 # Opteron      50 /71          10.1            30
30 # (*)   gcc 3.4.x was observed to generate a few percent slower code,
31 #       which is one of the reasons why 2.95.3 results were chosen;
32 #       the other reason is the lack of 3.4.x results for older CPUs;
33 # (**)  the second number is the result for code compiled with the -fPIC
34 #       flag, which is actually more relevant, because the assembler code
35 #       is position-independent;
36 # (***) see the comment in the non-MMX routine for further details;
38 # To summarize, it's >2-5 times faster than gcc-generated code. To
39 # anchor it to something else, SHA1 assembler processes one byte in
40 # 11-13 cycles on contemporary x86 cores. As for the choice of MMX in
41 # particular, see the comment at the end of the file...
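# For reference, below is a minimal bit-by-bit model of the GF(2^128)
# multiplication GHASH is built upon (Algorithm 1 from the GCM spec,
# bit-reflected representation with R=0xE1 followed by 120 zero bits);
# GHASH itself is Xi = (Xi ^ inp[i]) * H over consecutive 16-byte blocks.
# This sketch is illustrative only and is never called by the generator;
# the assembler code below computes the same product 4 (or 64) bits at
# a time.

use Math::BigInt;

sub gf128_mul_ref {
	my ($X,$Y) = @_;		# Math::BigInt, 0 <= value < 2^128
	my $R = Math::BigInt->from_hex("e1000000000000000000000000000000");
	my $Z = Math::BigInt->bzero();
	my $V = $Y->copy();
	for my $i (0..127) {
	    $Z ^= $V if (($X >> (127-$i)) & 1);	# bit 0 of X is the MSB
	    if ($V & 1)	{ $V = ($V >> 1) ^ $R; }
	    else	{ $V = $V >> 1;        }
	}
	return $Z;
}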
45 # Add PCLMULQDQ version performing at 2.13 cycles per processed byte.
46 # The question is: how close is it to the theoretical limit? The pclmulqdq
47 # instruction latency appears to be 14 cycles and there can't be more
48 # than 2 of them executing at any given time. This means that a single
49 # Karatsuba multiplication would take 28 cycles *plus* a few cycles for
50 # pre- and post-processing. Then the multiplication has to be followed by
51 # modulo-reduction. Given that the aggregated reduction method [see
52 # "Carry-less Multiplication and Its Usage for Computing the GCM Mode"
53 # white paper by Intel] allows you to perform reduction only once in
54 # a while, we can assume that asymptotic performance can be estimated
55 # as (28+Tmod/Naggr)/16, where Tmod is the time to perform reduction
56 # and Naggr is the aggregation factor.
58 # Before we proceed to this implementation, let's have a closer look at
59 # the best-performing code suggested by Intel in their white paper.
60 # By tracing inter-register dependencies Tmod is estimated as ~19
61 # cycles and Naggr is 4, resulting in 2.05 cycles per processed byte.
62 # As implied, this is quite an optimistic estimate, because it does not
63 # account for Karatsuba pre- and post-processing, which for a single
64 # multiplication is ~5 cycles. Unfortunately Intel does not provide
65 # performance data for GHASH alone, only for fused GCM mode. But
66 # we can estimate it by subtracting CTR performance result provided
67 # in "AES Instruction Set" white paper: 3.54-1.38=2.16 cycles per
68 # processed byte or 5% off the estimate. It should be noted though
69 # that 3.54 is GCM result for 16KB block size, while 1.38 is CTR for
70 # 1KB block size, meaning that real number is likely to be a bit
71 # further from estimate.
73 # Moving on to the implementation in question. Tmod is estimated as
74 # ~13 cycles and Naggr is 2, giving asymptotic performance of ...
75 # 2.16. How is it possible that the measured performance is better than
76 # the optimistic theoretical estimate? There is one thing Intel failed
77 # to recognize. By fusing GHASH with CTR, the former's performance is
78 # really limited by the above (Tmul + Tmod/Naggr) equation. But if the
79 # GHASH procedure is detached, the modulo-reduction can be interleaved
80 # with Naggr-1 multiplications and under ideal conditions even disappear
81 # from the equation. So the optimistic theoretical estimate for this
82 # implementation is ... 28/16=1.75, and not 2.16. Well, that's probably
83 # way too optimistic, at least for such a small Naggr. I'd argue that
84 # (28+Tproc/Naggr)/16, where Tproc is the time required for Karatsuba
85 # pre- and post-processing, is a more realistic estimate. In this case
86 # it gives ... 1.91 cycles per processed byte. Or in other words,
87 # depending on how well we can interleave the reduction with one of the
88 # two multiplications, the performance should be between 1.91 and 2.16.
89 # As already mentioned, this implementation processes one byte [out
90 # of a 1KB buffer] in 2.13 cycles, while the x86_64 counterpart does so
91 # in 2.07 cycles. x86_64 performance is better because the larger register
92 # bank allows reduction and multiplication to be interleaved better.
94 # Does it make sense to increase Naggr? To start with, it's virtually
95 # impossible in 32-bit mode, because of the limited register bank
96 # capacity. Otherwise the improvement has to be weighed against slower
97 # setup, as well as code size and complexity increase. As even the
98 # optimistic estimate doesn't promise a 30% performance improvement,
99 # there are currently no plans to increase Naggr.
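# For convenience, the estimates above in one place. Illustrative helper,
# never called by the generator; the arguments are the cycle counts quoted
# in the comments above.

sub cycles_per_byte_est {
	my ($Tmul,$Textra,$Naggr) = @_;
	return ($Tmul + $Textra/$Naggr)/16;	# cycles per processed byte
}
# cycles_per_byte_est(28,19,4) = ~2.05  (Intel's code: Tmod=19, Naggr=4)
# cycles_per_byte_est(28,13,2) = ~2.16  (this code:    Tmod=13, Naggr=2)
# cycles_per_byte_est(28, 5,2) = ~1.91  (reduction interleaved, Tproc=~5)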
101 # Special thanks to David Woodhouse <dwmw2@infradead.org> for
102 # providing access to a Westmere-based system on behalf of Intel
103 # Open Source Technology Centre.
105 $0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
106 push(@INC,"${dir}","${dir}../../perlasm");
109 &asm_init($ARGV[0],"ghash-x86.pl",$x86only = $ARGV[$#ARGV] eq "386");
112 for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
114 ($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
118 $unroll = 0; # Affects x86 loop. Folded loop performs ~7% worse
119 # than unrolled, which has to be weighed against
120 # 2.5x x86-specific code size reduction.
126 &mov ($Zhh,&DWP(4,$Htbl,$Zll));
127 &mov ($Zhl,&DWP(0,$Htbl,$Zll));
128 &mov ($Zlh,&DWP(12,$Htbl,$Zll));
129 &mov ($Zll,&DWP(8,$Htbl,$Zll));
130 &xor ($rem,$rem); # avoid partial register stalls on PIII
132 # shrd practically kills P4, 2.5x deterioration, but P4 has the
133 # MMX code-path to execute. shrd runs a tad faster [than twice
134 # the shifts, moves and ors] on pre-MMX Pentium (as well as on
135 # PIII and Core2), *but* it minimizes code size, spares a register
136 # and thus allows the loop to be folded...
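# The loop below consumes Xi one nibble at a time, from byte 15 down to
# byte 0: the four bits about to fall off the bottom of Z are saved in
# $rem, the 128-bit Z (Zhh:Zhl:Zlh:Zll) is shifted right by 4 via shrd,
# rem_4bit[$rem] (deposited on the stack <<16) is folded into the top
# dword, and Htbl[nibble] is xored in.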
140 &jmp (&label("x86_loop"));
141 &set_label("x86_loop",16);
142 for($i=1;$i<=2;$i++) {
143 &mov (&LB($rem),&LB($Zll));
145 &and (&LB($rem),0xf);
149 &xor ($Zhh,&DWP($off+16,"esp",$rem,4));
151 &mov (&LB($rem),&BP($off,"esp",$cnt));
153 &and (&LB($rem),0xf0);
158 &xor ($Zll,&DWP(8,$Htbl,$rem));
159 &xor ($Zlh,&DWP(12,$Htbl,$rem));
160 &xor ($Zhl,&DWP(0,$Htbl,$rem));
161 &xor ($Zhh,&DWP(4,$Htbl,$rem));
165 &js (&label("x86_break"));
167 &jmp (&label("x86_loop"));
170 &set_label("x86_break",16);
172 for($i=1;$i<32;$i++) {
174 &mov (&LB($rem),&LB($Zll));
176 &and (&LB($rem),0xf);
180 &xor ($Zhh,&DWP($off+16,"esp",$rem,4));
183 &mov (&LB($rem),&BP($off+15-($i>>1),"esp"));
184 &and (&LB($rem),0xf0);
186 &mov (&LB($rem),&BP($off+15-($i>>1),"esp"));
190 &xor ($Zll,&DWP(8,$Htbl,$rem));
191 &xor ($Zlh,&DWP(12,$Htbl,$rem));
192 &xor ($Zhl,&DWP(0,$Htbl,$rem));
193 &xor ($Zhh,&DWP(4,$Htbl,$rem));
209 &function_begin_B("_x86_gmult_4bit_inner");
212 &function_end_B("_x86_gmult_4bit_inner");
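# The 16 words deposited by deposit_rem_4bit below (and the rem_4bit table
# emitted at the end of this file) are the reductions of the four bits that
# fall off the low end of Z during a >>4 step: they are linear in the
# nibble, each bit contributing a shifted copy of the 0xE1 reduction byte.
# Reference sketch of the derivation (illustrative only, never called by
# the generator):

sub rem_4bit_ref {
	my @tab;
	for my $n (0..15) {
	    my $r = 0;
	    for my $bit (0..3) {
		$r ^= 0xE100>>(3-$bit) if (($n>>$bit)&1);
	    }
	    push (@tab,$r);		# 0x0000,0x1C20,0x3840,0x2460,...
	}
	return @tab;			# deposit_rem_4bit stores these <<16,
}					# the rem_4bit data below stores <<$S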
215 sub deposit_rem_4bit {
218 &mov (&DWP($bias+0, "esp"),0x0000<<16);
219 &mov (&DWP($bias+4, "esp"),0x1C20<<16);
220 &mov (&DWP($bias+8, "esp"),0x3840<<16);
221 &mov (&DWP($bias+12,"esp"),0x2460<<16);
222 &mov (&DWP($bias+16,"esp"),0x7080<<16);
223 &mov (&DWP($bias+20,"esp"),0x6CA0<<16);
224 &mov (&DWP($bias+24,"esp"),0x48C0<<16);
225 &mov (&DWP($bias+28,"esp"),0x54E0<<16);
226 &mov (&DWP($bias+32,"esp"),0xE100<<16);
227 &mov (&DWP($bias+36,"esp"),0xFD20<<16);
228 &mov (&DWP($bias+40,"esp"),0xD940<<16);
229 &mov (&DWP($bias+44,"esp"),0xC560<<16);
230 &mov (&DWP($bias+48,"esp"),0x9180<<16);
231 &mov (&DWP($bias+52,"esp"),0x8DA0<<16);
232 &mov (&DWP($bias+56,"esp"),0xA9C0<<16);
233 &mov (&DWP($bias+60,"esp"),0xB5E0<<16);
236 $suffix = $x86only ? "" : "_x86";
238 &function_begin("gcm_gmult_4bit".$suffix);
239 &stack_push(16+4+1); # +1 for stack alignment
240 &mov ($inp,&wparam(0)); # load Xi
241 &mov ($Htbl,&wparam(1)); # load Htable
243 &mov ($Zhh,&DWP(0,$inp)); # load Xi[16]
244 &mov ($Zhl,&DWP(4,$inp));
245 &mov ($Zlh,&DWP(8,$inp));
246 &mov ($Zll,&DWP(12,$inp));
248 &deposit_rem_4bit(16);
250 &mov (&DWP(0,"esp"),$Zhh); # copy Xi[16] on stack
251 &mov (&DWP(4,"esp"),$Zhl);
252 &mov (&DWP(8,"esp"),$Zlh);
253 &mov (&DWP(12,"esp"),$Zll);
258 &call ("_x86_gmult_4bit_inner");
261 &mov ($inp,&wparam(0));
264 &mov (&DWP(12,$inp),$Zll);
265 &mov (&DWP(8,$inp),$Zlh);
266 &mov (&DWP(4,$inp),$Zhl);
267 &mov (&DWP(0,$inp),$Zhh);
269 &function_end("gcm_gmult_4bit".$suffix);
271 &function_begin("gcm_ghash_4bit".$suffix);
272 &stack_push(16+4+1); # +1 for 64-bit alignment
273 &mov ($Zll,&wparam(0)); # load Xi
274 &mov ($Htbl,&wparam(1)); # load Htable
275 &mov ($inp,&wparam(2)); # load in
276 &mov ("ecx",&wparam(3)); # load len
278 &mov (&wparam(3),"ecx");
280 &mov ($Zhh,&DWP(0,$Zll)); # load Xi[16]
281 &mov ($Zhl,&DWP(4,$Zll));
282 &mov ($Zlh,&DWP(8,$Zll));
283 &mov ($Zll,&DWP(12,$Zll));
285 &deposit_rem_4bit(16);
287 &set_label("x86_outer_loop",16);
288 &xor ($Zll,&DWP(12,$inp)); # xor with input
289 &xor ($Zlh,&DWP(8,$inp));
290 &xor ($Zhl,&DWP(4,$inp));
291 &xor ($Zhh,&DWP(0,$inp));
292 &mov (&DWP(12,"esp"),$Zll); # dump it on stack
293 &mov (&DWP(8,"esp"),$Zlh);
294 &mov (&DWP(4,"esp"),$Zhl);
295 &mov (&DWP(0,"esp"),$Zhh);
301 &call ("_x86_gmult_4bit_inner");
304 &mov ($inp,&wparam(2));
306 &lea ($inp,&DWP(16,$inp));
307 &cmp ($inp,&wparam(3));
308 &mov (&wparam(2),$inp) if (!$unroll);
309 &jb (&label("x86_outer_loop"));
311 &mov ($inp,&wparam(0)); # load Xi
312 &mov (&DWP(12,$inp),$Zll);
313 &mov (&DWP(8,$inp),$Zlh);
314 &mov (&DWP(4,$inp),$Zhl);
315 &mov (&DWP(0,$inp),$Zhh);
317 &function_end("gcm_ghash_4bit".$suffix);
321 &static_label("rem_4bit");
323 if (0) {{ # "May" MMX version is kept for reference...
325 $S=12; # shift factor for rem_4bit
327 &function_begin_B("_mmx_gmult_4bit_inner");
328 # MMX version performs 3.5 times better on P4 (see comment in non-MMX
329 # routine for further details), 100% better on Opteron, ~70% better
330 # on Core2 and PIII... In other words the effort is considered well
331 # spent... Since the initial release the loop was unrolled in order to
332 # "liberate" the register previously used as the loop counter. Instead
333 # it's used to optimize the critical path in 'Z.hi ^= rem_4bit[Z.lo&0xf]'.
334 # The path involves a move of Z.lo from MMX to an integer register, the
335 # effective address calculation and finally a merge of the value into Z.hi.
336 # The reference to rem_4bit is scheduled so late that I had to store the
337 # rem_4bit elements pre-shifted >>4. This resulted in a 20-45% improvement
338 # on contemporary µ-archs.
341 my $rem_4bit = "eax";
342 my @rem = ($Zhh,$Zll);
346 my ($Zlo,$Zhi) = ("mm0","mm1");
349 &xor ($nlo,$nlo); # avoid partial register stalls on PIII
351 &mov (&LB($nlo),&LB($nhi));
354 &movq ($Zlo,&QWP(8,$Htbl,$nlo));
355 &movq ($Zhi,&QWP(0,$Htbl,$nlo));
356 &movd ($rem[0],$Zlo);
358 for ($cnt=28;$cnt>=-2;$cnt--) {
360 my $nix = $odd ? $nlo : $nhi;
362 &shl (&LB($nlo),4) if ($odd);
366 &pxor ($Zlo,&QWP(8,$Htbl,$nix));
367 &mov (&LB($nlo),&BP($cnt/2,$inp)) if (!$odd && $cnt>=0);
369 &and ($nhi,0xf0) if ($odd);
370 &pxor ($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
372 &pxor ($Zhi,&QWP(0,$Htbl,$nix));
373 &mov ($nhi,$nlo) if (!$odd && $cnt>=0);
374 &movd ($rem[1],$Zlo);
377 push (@rem,shift(@rem)); # "rotate" registers
380 &mov ($inp,&DWP(4,$rem_4bit,$rem[1],8)); # last rem_4bit[rem]
382 &psrlq ($Zlo,32); # lower part of Zlo is already there
387 &shl ($inp,4); # compensate for rem_4bit[i] being >>4
397 &function_end_B("_mmx_gmult_4bit_inner");
399 &function_begin("gcm_gmult_4bit_mmx");
400 &mov ($inp,&wparam(0)); # load Xi
401 &mov ($Htbl,&wparam(1)); # load Htable
403 &call (&label("pic_point"));
404 &set_label("pic_point");
406 &lea ("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
408 &movz ($Zll,&BP(15,$inp));
410 &call ("_mmx_gmult_4bit_inner");
412 &mov ($inp,&wparam(0)); # load Xi
414 &mov (&DWP(12,$inp),$Zll);
415 &mov (&DWP(4,$inp),$Zhl);
416 &mov (&DWP(8,$inp),$Zlh);
417 &mov (&DWP(0,$inp),$Zhh);
418 &function_end("gcm_gmult_4bit_mmx");
420 # Streamed version performs 20% better on P4, 7% on Opteron,
421 # 10% on Core2 and PIII...
422 &function_begin("gcm_ghash_4bit_mmx");
423 &mov ($Zhh,&wparam(0)); # load Xi
424 &mov ($Htbl,&wparam(1)); # load Htable
425 &mov ($inp,&wparam(2)); # load in
426 &mov ($Zlh,&wparam(3)); # load len
428 &call (&label("pic_point"));
429 &set_label("pic_point");
431 &lea ("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
434 &mov (&wparam(3),$Zlh); # len to point at the end of input
435 &stack_push(4+1); # +1 for stack alignment
437 &mov ($Zll,&DWP(12,$Zhh)); # load Xi[16]
438 &mov ($Zhl,&DWP(4,$Zhh));
439 &mov ($Zlh,&DWP(8,$Zhh));
440 &mov ($Zhh,&DWP(0,$Zhh));
441 &jmp (&label("mmx_outer_loop"));
443 &set_label("mmx_outer_loop",16);
444 &xor ($Zll,&DWP(12,$inp));
445 &xor ($Zhl,&DWP(4,$inp));
446 &xor ($Zlh,&DWP(8,$inp));
447 &xor ($Zhh,&DWP(0,$inp));
448 &mov (&wparam(2),$inp);
449 &mov (&DWP(12,"esp"),$Zll);
450 &mov (&DWP(4,"esp"),$Zhl);
451 &mov (&DWP(8,"esp"),$Zlh);
452 &mov (&DWP(0,"esp"),$Zhh);
457 &call ("_mmx_gmult_4bit_inner");
459 &mov ($inp,&wparam(2));
460 &lea ($inp,&DWP(16,$inp));
461 &cmp ($inp,&wparam(3));
462 &jb (&label("mmx_outer_loop"));
464 &mov ($inp,&wparam(0)); # load Xi
466 &mov (&DWP(12,$inp),$Zll);
467 &mov (&DWP(4,$inp),$Zhl);
468 &mov (&DWP(8,$inp),$Zlh);
469 &mov (&DWP(0,$inp),$Zhh);
472 &function_end("gcm_ghash_4bit_mmx");
474 }} else {{ # "June" MMX version...
475 # ... has "April" gcm_gmult_4bit_mmx with folded loop.
476 # This is done to conserve code size...
477 $S=16; # shift factor for rem_4bit
480 # MMX version performs 2.8 times better on P4 (see comment in non-MMX
481 # routine for further details), 40% better on Opteron and Core2, 50%
482 # better on PIII... In other words the effort is considered well spent...
485 my $rem_4bit = shift;
491 my ($Zlo,$Zhi) = ("mm0","mm1");
494 &xor ($nlo,$nlo); # avoid partial register stalls on PIII
496 &mov (&LB($nlo),&LB($nhi));
500 &movq ($Zlo,&QWP(8,$Htbl,$nlo));
501 &movq ($Zhi,&QWP(0,$Htbl,$nlo));
503 &jmp (&label("mmx_loop"));
505 &set_label("mmx_loop",16);
510 &pxor ($Zlo,&QWP(8,$Htbl,$nhi));
511 &mov (&LB($nlo),&BP(0,$inp,$cnt));
513 &pxor ($Zhi,&QWP(0,$rem_4bit,$rem,8));
516 &pxor ($Zhi,&QWP(0,$Htbl,$nhi));
519 &js (&label("mmx_break"));
527 &pxor ($Zlo,&QWP(8,$Htbl,$nlo));
529 &pxor ($Zhi,&QWP(0,$rem_4bit,$rem,8));
531 &pxor ($Zhi,&QWP(0,$Htbl,$nlo));
533 &jmp (&label("mmx_loop"));
535 &set_label("mmx_break",16);
542 &pxor ($Zlo,&QWP(8,$Htbl,$nlo));
544 &pxor ($Zhi,&QWP(0,$rem_4bit,$rem,8));
546 &pxor ($Zhi,&QWP(0,$Htbl,$nlo));
553 &pxor ($Zlo,&QWP(8,$Htbl,$nhi));
555 &pxor ($Zhi,&QWP(0,$rem_4bit,$rem,8));
557 &pxor ($Zhi,&QWP(0,$Htbl,$nhi));
560 &psrlq ($Zlo,32); # lower part of Zlo is already there
572 &function_begin("gcm_gmult_4bit_mmx");
573 &mov ($inp,&wparam(0)); # load Xi
574 &mov ($Htbl,&wparam(1)); # load Htable
576 &call (&label("pic_point"));
577 &set_label("pic_point");
579 &lea ("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
581 &movz ($Zll,&BP(15,$inp));
583 &mmx_loop($inp,"eax");
586 &mov (&DWP(12,$inp),$Zll);
587 &mov (&DWP(4,$inp),$Zhl);
588 &mov (&DWP(8,$inp),$Zlh);
589 &mov (&DWP(0,$inp),$Zhh);
590 &function_end("gcm_gmult_4bit_mmx");
592 ######################################################################
593 # The subroutine below is the "528B" variant of the "4-bit" GCM GHASH
594 # function (see gcm128.c for details). It provides a further 20-40%
595 # performance improvement over the *previous* version of this module.
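# The rem_8bit table referenced below follows the same pattern as rem_4bit,
# extended to a full byte: entry n is the xor of shifted copies of the 0xE1
# reduction byte, one per set bit of n. Reference sketch (illustrative only,
# never called by the generator):

sub rem_8bit_ref {
	my @tab;
	for my $n (0..255) {
	    my $r = 0;
	    for my $bit (0..7) {
		$r ^= 0xE100>>(7-$bit) if (($n>>$bit)&1);
	    }
	    push (@tab,$r);		# 0x0000,0x01C2,0x0384,0x0246,...
	}
	return @tab;
}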
597 &static_label("rem_8bit");
599 &function_begin("gcm_ghash_4bit_mmx");
600 { my ($Zlo,$Zhi) = ("mm7","mm6");
601 my $rem_8bit = "esi";
605 &mov ("eax",&wparam(0)); # Xi
606 &mov ("ebx",&wparam(1)); # Htable
607 &mov ("ecx",&wparam(2)); # inp
608 &mov ("edx",&wparam(3)); # len
609 &mov ("ebp","esp"); # original %esp
610 &call (&label("pic_point"));
611 &set_label ("pic_point");
612 &blindpop ($rem_8bit);
613 &lea ($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));
615 &sub ("esp",512+16+16); # allocate stack frame...
616 &and ("esp",-64); # ...and align it
617 &sub ("esp",16); # place for (u8)(H[]<<4)
619 &add ("edx","ecx"); # pointer to the end of input
620 &mov (&DWP(528+16+0,"esp"),"eax"); # save Xi
621 &mov (&DWP(528+16+8,"esp"),"edx"); # save inp+len
622 &mov (&DWP(528+16+12,"esp"),"ebp"); # save original %esp
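	# Resulting stack frame layout (derived from the code below):
	#   esp+0   ... +15     (u8)(H[i]<<4), one byte per nibble value
	#   esp+16  ... +143    Htable, low  halves (16 qwords)
	#   esp+144 ... +271    Htable, high halves
	#   esp+272 ... +399    Htable>>4, low  halves
	#   esp+400 ... +527    Htable>>4, high halves
	#   esp+528 ... +543    Xi^inp scratch for the block being hashed
	#   esp+544 ...         saved Xi pointer, current inp, inp+len, %esp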
624 { my @lo = ("mm0","mm1","mm2");
625 my @hi = ("mm3","mm4","mm5");
626 my @tmp = ("mm6","mm7");
627 my ($off1,$off2,$i)=(0,0);
629 &add ($Htbl,128); # optimize for size
630 &lea ("edi",&DWP(16+128,"esp"));
631 &lea ("ebp",&DWP(16+256+128,"esp"));
633 # decompose Htable (low and high parts are kept separately),
634 # generate Htable>>4, save to stack...
635 for ($i=0;$i<18;$i++) {
637 &mov ("edx",&DWP(16*$i+8-128,$Htbl)) if ($i<16);
638 &movq ($lo[0],&QWP(16*$i+8-128,$Htbl)) if ($i<16);
639 &psllq ($tmp[1],60) if ($i>1);
640 &movq ($hi[0],&QWP(16*$i+0-128,$Htbl)) if ($i<16);
641 &por ($lo[2],$tmp[1]) if ($i>1);
642 &movq (&QWP($off1-128,"edi"),$lo[1]) if ($i>0 && $i<17);
643 &psrlq ($lo[1],4) if ($i>0 && $i<17);
644 &movq (&QWP($off1,"edi"),$hi[1]) if ($i>0 && $i<17);
645 &movq ($tmp[0],$hi[1]) if ($i>0 && $i<17);
646 &movq (&QWP($off2-128,"ebp"),$lo[2]) if ($i>1);
647 &psrlq ($hi[1],4) if ($i>0 && $i<17);
648 &movq (&QWP($off2,"ebp"),$hi[2]) if ($i>1);
649 &shl ("edx",4) if ($i<16);
650 &mov (&BP($i,"esp"),&LB("edx")) if ($i<16);
652 unshift (@lo,pop(@lo)); # "rotate" registers
653 unshift (@hi,pop(@hi));
654 unshift (@tmp,pop(@tmp));
655 $off1 += 8 if ($i>0);
656 $off2 += 8 if ($i>1);
660 &movq ($Zhi,&QWP(0,"eax"));
661 &mov ("ebx",&DWP(8,"eax"));
662 &mov ("edx",&DWP(12,"eax")); # load Xi
664 &set_label("outer",16);
667 my @nhi = ("edi","ebp");
668 my @rem = ("ebx","ecx");
669 my @red = ("mm0","mm1","mm2");
672 &xor ($dat,&DWP(12,"ecx")); # merge input
673 &xor ("ebx",&DWP(8,"ecx"));
674 &pxor ($Zhi,&QWP(0,"ecx"));
675 &lea ("ecx",&DWP(16,"ecx")); # inp+=16
676 #&mov (&DWP(528+12,"esp"),$dat); # save inp^Xi
677 &mov (&DWP(528+8,"esp"),"ebx");
678 &movq (&QWP(528+0,"esp"),$Zhi);
679 &mov (&DWP(528+16+4,"esp"),"ecx"); # save inp
683 &mov (&LB($nlo),&LB($dat));
685 &and (&LB($nlo),0x0f);
687 &pxor ($red[0],$red[0]);
688 &rol ($dat,8); # next byte
689 &pxor ($red[1],$red[1]);
690 &pxor ($red[2],$red[2]);
692 # Just like in the "May" version we modulo-schedule the critical path in
693 # 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. The final xor
694 # is scheduled so late that rem_8bit ends up shifted *right* by 16,
695 # which is why the last argument to pinsrw is 2, which corresponds to
696 # an insertion at bits 32..47, i.e. an effective <<32.
697 for ($j=11,$i=0;$i<15;$i++) {
700 &pxor ($Zlo,&QWP(16,"esp",$nlo,8)); # Z^=H[nlo]
701 &rol ($dat,8); # next byte
702 &pxor ($Zhi,&QWP(16+128,"esp",$nlo,8));
705 &pxor ($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
706 &xor (&LB($rem[1]),&BP(0,"esp",$nhi[0])); # rem^H[nhi]<<4
708 &movq ($Zlo,&QWP(16,"esp",$nlo,8));
709 &movq ($Zhi,&QWP(16+128,"esp",$nlo,8));
712 &mov (&LB($nlo),&LB($dat));
713 &mov ($dat,&DWP(528+$j,"esp")) if (--$j%4==0);
715 &movd ($rem[0],$Zlo);
716 &movz ($rem[1],&LB($rem[1])) if ($i>0);
723 &pxor ($Zlo,&QWP(16+256+0,"esp",$nhi[1],8)); # Z^=H[nhi]>>4
724 &and (&LB($nlo),0x0f);
727 &pxor ($Zhi,$red[1]) if ($i>1);
729 &pinsrw ($red[0],&WP(0,$rem_8bit,$rem[1],2),2) if ($i>0);
731 unshift (@red,pop(@red)); # "rotate" registers
732 unshift (@rem,pop(@rem));
733 unshift (@nhi,pop(@nhi));
736 &pxor ($Zlo,&QWP(16,"esp",$nlo,8)); # Z^=H[nlo]
737 &pxor ($Zhi,&QWP(16+128,"esp",$nlo,8));
738 &xor (&LB($rem[1]),&BP(0,"esp",$nhi[0])); #$rem[0]); # rem^H[nhi]<<4
741 &pxor ($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
742 &movz ($rem[1],&LB($rem[1]));
744 &pxor ($red[2],$red[2]); # clear 2nd word
747 &movd ($rem[0],$Zlo);
754 &pxor ($Zlo,&QWP(16,"esp",$nhi[1],8)); # Z^=H[nhi]
756 &movz ($rem[0],&LB($rem[0]));
759 &pxor ($Zhi,&QWP(16+128,"esp",$nhi[1],8));
761 &pinsrw ($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
762 &pxor ($Zhi,$red[1]);
765 &pinsrw ($red[2],&WP(0,$rem_8bit,$rem[0],2),3);
768 &pxor ($Zhi,$red[0]);
770 &pxor ($Zhi,$red[2]);
772 &mov ("ecx",&DWP(528+16+4,"esp")); # restore inp
774 &movq ($tmp,$Zhi); # 01234567
775 &psllw ($Zhi,8); # 1.3.5.7.
776 &psrlw ($tmp,8); # .0.2.4.6
777 &por ($Zhi,$tmp); # 10325476
779 &pshufw ($Zhi,$Zhi,0b00011011); # 76543210
782 &cmp ("ecx",&DWP(528+16+8,"esp")); # are we done?
783 &jne (&label("outer"));
786 &mov ("eax",&DWP(528+16+0,"esp")); # restore Xi
787 &mov (&DWP(12,"eax"),"edx");
788 &mov (&DWP(8,"eax"),"ebx");
789 &movq (&QWP(0,"eax"),$Zhi);
791 &mov ("esp",&DWP(528+16+12,"esp")); # restore original %esp
794 &function_end("gcm_ghash_4bit_mmx");
798 ######################################################################
807 ($Xi,$Xhi)=("xmm0","xmm1"); $Hkey="xmm2";
808 ($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
809 ($Xn,$Xhn)=("xmm6","xmm7");
811 &static_label("bswap");
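# clmul64x64_T2 below computes the 128x128-bit carry-less product with just
# three pclmulqdq instructions by Karatsuba: with a=a1*2^64+a0, b=b1*2^64+b0,
#
#	a*b = a1*b1*2^128 ^ ((a1^a0)*(b1^b0) ^ a1*b1 ^ a0*b0)*2^64 ^ a0*b0
#
# Reference sketch of a single 64x64 carry-less multiplication (illustrative
# only, never called by the generator):

use Math::BigInt;

sub clmul64_ref {
	my ($a,$b) = @_;		# Math::BigInt, 0 <= value < 2^64
	my $r = Math::BigInt->bzero();
	for my $i (0..63) {
	    $r ^= ($a << $i) if (($b >> $i) & 1);
	}
	return $r;			# up-to-127-bit carry-less product
}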
813 sub clmul64x64_T2 { # minimal "register" pressure
814 my ($Xhi,$Xi,$Hkey)=@_;
816 &movdqa ($Xhi,$Xi); #
817 &pshufd ($T1,$Xi,0b01001110);
818 &pshufd ($T2,$Hkey,0b01001110);
822 &pclmulqdq ($Xi,$Hkey,0x00); #######
823 &pclmulqdq ($Xhi,$Hkey,0x11); #######
824 &pclmulqdq ($T1,$T2,0x00); #######
836 # Even though this subroutine offers visually better ILP, it
837 # was empirically found to be a tad slower than the version above,
838 # at least in the gcm_ghash_clmul context. But it's just as well,
839 # because the loop modulo-scheduling is possible only thanks to the
840 # minimized "register" pressure...
841 my ($Xhi,$Xi,$Hkey)=@_;
845 &pclmulqdq ($Xi,$Hkey,0x00); #######
846 &pclmulqdq ($Xhi,$Hkey,0x11); #######
847 &pshufd ($T2,$T1,0b01001110); #
848 &pshufd ($T3,$Hkey,0b01001110);
851 &pclmulqdq ($T2,$T3,0x00); #######
862 if (1) { # Algorithm 9 with <<1 twist.
863 # Reduction is shorter and uses only two
864 # temporary registers, which makes it a better
865 # candidate for interleaving with the 64x64
866 # multiplication. The pre-modulo-scheduled loop
867 # was found to be ~20% faster than Algorithm 5
868 # below. Algorithm 9 was therefore chosen for
869 # further optimization...
871 sub reduction_alg9 { # 17/13 times faster than Intel version
898 &function_begin_B("gcm_init_clmul");
899 &mov ($Htbl,&wparam(0));
900 &mov ($Xip,&wparam(1));
902 &call (&label("pic"));
905 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
907 &movdqu ($Hkey,&QWP(0,$Xip));
908 &pshufd ($Hkey,$Hkey,0b01001110);# dword swap
911 &pshufd ($T2,$Hkey,0b11111111); # broadcast uppermost dword
916 &pcmpgtd ($T3,$T2); # broadcast carry bit
918 &por ($Hkey,$T1); # H<<=1
921 &pand ($T3,&QWP(16,$const)); # 0x1c2_polynomial
922 &pxor ($Hkey,$T3); # if(carry) H^=0x1c2_polynomial
926 &clmul64x64_T2 ($Xhi,$Xi,$Hkey);
927 &reduction_alg9 ($Xhi,$Xi);
929 &movdqu (&QWP(0,$Htbl),$Hkey); # save H
930 &movdqu (&QWP(16,$Htbl),$Xi); # save H^2
933 &function_end_B("gcm_init_clmul");
935 &function_begin_B("gcm_gmult_clmul");
936 &mov ($Xip,&wparam(0));
937 &mov ($Htbl,&wparam(1));
939 &call (&label("pic"));
942 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
944 &movdqu ($Xi,&QWP(0,$Xip));
945 &movdqa ($T3,&QWP(0,$const));
946 &movdqu ($Hkey,&QWP(0,$Htbl));
949 &clmul64x64_T2 ($Xhi,$Xi,$Hkey);
950 &reduction_alg9 ($Xhi,$Xi);
953 &movdqu (&QWP(0,$Xip),$Xi);
956 &function_end_B("gcm_gmult_clmul");
958 &function_begin("gcm_ghash_clmul");
959 &mov ($Xip,&wparam(0));
960 &mov ($Htbl,&wparam(1));
961 &mov ($inp,&wparam(2));
962 &mov ($len,&wparam(3));
964 &call (&label("pic"));
967 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
969 &movdqu ($Xi,&QWP(0,$Xip));
970 &movdqa ($T3,&QWP(0,$const));
971 &movdqu ($Hkey,&QWP(0,$Htbl));
975 &jz (&label("odd_tail"));
978 # Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
979 # [(H*Ii+1) + (H*Xi+1)] mod P =
980 # [(H*Ii+1) + H^2*(Ii+Xi)] mod P
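# i.e. two input blocks are folded per iteration: the even block is xored
# into Xi and multiplied by H^2, the odd block is multiplied by H, and the
# two products are added before a single reduction (the 2-way aggregated
# reduction, Naggr=2, discussed at the top of the file).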
982 &movdqu ($T1,&QWP(0,$inp)); # Ii
983 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
986 &pxor ($Xi,$T1); # Ii+Xi
988 &clmul64x64_T2 ($Xhn,$Xn,$Hkey); # H*Ii+1
989 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
991 &lea ($inp,&DWP(32,$inp)); # i+=2
993 &jbe (&label("even_tail"));
995 &set_label("mod_loop");
996 &clmul64x64_T2 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
997 &movdqu ($T1,&QWP(0,$inp)); # Ii
998 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
1000 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
1003 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
1007 &movdqa ($T3,$Xn); #&clmul64x64_TX ($Xhn,$Xn,$Hkey); H*Ii+1
1009 &pxor ($Xhi,$T1); # "Ii+Xi", consume early
1011 &movdqa ($T1,$Xi) #&reduction_alg9($Xhi,$Xi); 1st phase
1016 &pclmulqdq ($Xn,$Hkey,0x00); #######
1018 &movdqa ($T2,$Xi); #
1022 &pshufd ($T1,$T3,0b01001110);
1025 &pshufd ($T3,$Hkey,0b01001110);
1026 &pxor ($T3,$Hkey); #
1028 &pclmulqdq ($Xhn,$Hkey,0x11); #######
1029 &movdqa ($T2,$Xi); # 2nd phase
1038 &pclmulqdq ($T1,$T3,0x00); #######
1039 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
1043 &movdqa ($T3,$T1); #
1048 &movdqa ($T3,&QWP(0,$const));
1050 &lea ($inp,&DWP(32,$inp));
1052 &ja (&label("mod_loop"));
1054 &set_label("even_tail");
1055 &clmul64x64_T2 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
1057 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
1060 &reduction_alg9 ($Xhi,$Xi);
1063 &jnz (&label("done"));
1065 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
1066 &set_label("odd_tail");
1067 &movdqu ($T1,&QWP(0,$inp)); # Ii
1069 &pxor ($Xi,$T1); # Ii+Xi
1071 &clmul64x64_T2 ($Xhi,$Xi,$Hkey); # H*(Ii+Xi)
1072 &reduction_alg9 ($Xhi,$Xi);
1076 &movdqu (&QWP(0,$Xip),$Xi);
1077 &function_end("gcm_ghash_clmul");
1079 } else { # Algorithm 5. Kept for reference purposes.
1081 sub reduction_alg5 { # 19/16 times faster than Intel version
1085 &movdqa ($T1,$Xi); #
1102 &movdqa ($T3,$Xi); #
1108 &movdqa ($T2,$T1); #
1126 &function_begin_B("gcm_init_clmul");
1127 &mov ($Htbl,&wparam(0));
1128 &mov ($Xip,&wparam(1));
1130 &call (&label("pic"));
1133 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1135 &movdqu ($Hkey,&QWP(0,$Xip));
1136 &pshufd ($Hkey,$Hkey,0b01001110);# dword swap
1139 &movdqa ($Xi,$Hkey);
1140 &clmul64x64_T3 ($Xhi,$Xi,$Hkey);
1141 &reduction_alg5 ($Xhi,$Xi);
1143 &movdqu (&QWP(0,$Htbl),$Hkey); # save H
1144 &movdqu (&QWP(16,$Htbl),$Xi); # save H^2
1147 &function_end_B("gcm_init_clmul");
1149 &function_begin_B("gcm_gmult_clmul");
1150 &mov ($Xip,&wparam(0));
1151 &mov ($Htbl,&wparam(1));
1153 &call (&label("pic"));
1156 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1158 &movdqu ($Xi,&QWP(0,$Xip));
1159 &movdqa ($Xn,&QWP(0,$const));
1160 &movdqu ($Hkey,&QWP(0,$Htbl));
1163 &clmul64x64_T3 ($Xhi,$Xi,$Hkey);
1164 &reduction_alg5 ($Xhi,$Xi);
1167 &movdqu (&QWP(0,$Xip),$Xi);
1170 &function_end_B("gcm_gmult_clmul");
1172 &function_begin("gcm_ghash_clmul");
1173 &mov ($Xip,&wparam(0));
1174 &mov ($Htbl,&wparam(1));
1175 &mov ($inp,&wparam(2));
1176 &mov ($len,&wparam(3));
1178 &call (&label("pic"));
1181 &lea ($const,&DWP(&label("bswap")."-".&label("pic"),$const));
1183 &movdqu ($Xi,&QWP(0,$Xip));
1184 &movdqa ($T3,&QWP(0,$const));
1185 &movdqu ($Hkey,&QWP(0,$Htbl));
1189 &jz (&label("odd_tail"));
1192 # Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
1193 # [(H*Ii+1) + (H*Xi+1)] mod P =
1194 # [(H*Ii+1) + H^2*(Ii+Xi)] mod P
1196 &movdqu ($T1,&QWP(0,$inp)); # Ii
1197 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
1200 &pxor ($Xi,$T1); # Ii+Xi
1202 &clmul64x64_T3 ($Xhn,$Xn,$Hkey); # H*Ii+1
1203 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
1206 &lea ($inp,&DWP(32,$inp)); # i+=2
1207 &jbe (&label("even_tail"));
1209 &set_label("mod_loop");
1210 &clmul64x64_T3 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
1211 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
1213 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
1216 &reduction_alg5 ($Xhi,$Xi);
1219 &movdqa ($T3,&QWP(0,$const));
1220 &movdqu ($T1,&QWP(0,$inp)); # Ii
1221 &movdqu ($Xn,&QWP(16,$inp)); # Ii+1
1224 &pxor ($Xi,$T1); # Ii+Xi
1226 &clmul64x64_T3 ($Xhn,$Xn,$Hkey); # H*Ii+1
1227 &movdqu ($Hkey,&QWP(16,$Htbl)); # load H^2
1230 &lea ($inp,&DWP(32,$inp));
1231 &ja (&label("mod_loop"));
1233 &set_label("even_tail");
1234 &clmul64x64_T3 ($Xhi,$Xi,$Hkey); # H^2*(Ii+Xi)
1236 &pxor ($Xi,$Xn); # (H*Ii+1) + H^2*(Ii+Xi)
1239 &reduction_alg5 ($Xhi,$Xi);
1241 &movdqa ($T3,&QWP(0,$const));
1243 &jnz (&label("done"));
1245 &movdqu ($Hkey,&QWP(0,$Htbl)); # load H
1246 &set_label("odd_tail");
1247 &movdqu ($T1,&QWP(0,$inp)); # Ii
1249 &pxor ($Xi,$T1); # Ii+Xi
1251 &clmul64x64_T3 ($Xhi,$Xi,$Hkey); # H*(Ii+Xi)
1252 &reduction_alg5 ($Xhi,$Xi);
1254 &movdqa ($T3,&QWP(0,$const));
1257 &movdqu (&QWP(0,$Xip),$Xi);
1258 &function_end("gcm_ghash_clmul");
1262 &set_label("bswap",64);
1263 &data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
1264 &data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2); # 0x1c2_polynomial
1267 &set_label("rem_4bit",64);
1268 &data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
1269 &data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
1270 &data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
1271 &data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
1272 &set_label("rem_8bit",64);
1273 &data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
1274 &data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
1275 &data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
1276 &data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
1277 &data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
1278 &data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
1279 &data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
1280 &data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
1281 &data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
1282 &data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
1283 &data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
1284 &data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
1285 &data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
1286 &data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
1287 &data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
1288 &data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
1289 &data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
1290 &data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
1291 &data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
1292 &data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
1293 &data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
1294 &data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
1295 &data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
1296 &data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
1297 &data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
1298 &data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
1299 &data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
1300 &data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
1301 &data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
1302 &data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
1303 &data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
1304 &data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
1307 &asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
1310 # A question was raised about the choice of vanilla MMX. Or rather, why
1311 # wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
1312 # legacy CPUs such as PIII, the "4-bit" MMX version was observed to provide
1313 # better performance than the *corresponding* SSE2 one even on contemporary
1314 # CPUs. SSE2 results were provided by Peter-Michael Hager. He maintains an
1315 # SSE2 implementation featuring a full range of lookup-table sizes, but with
1316 # per-invocation lookup table setup. The latter means that the table size is
1317 # chosen depending on how much data is to be hashed in every given call:
1318 # more data - larger table. The best reported result for Core2 is ~4 cycles
1319 # per processed byte out of a 64KB block. Recall that this number even
1320 # accounts for the 64KB table setup overhead. As discussed in gcm128.c we
1321 # choose to be more conservative with respect to lookup table sizes, but how
1322 # do the results compare? The minimalistic "256B" MMX version delivers ~11
1323 # cycles on the same platform. As also discussed in gcm128.c, the next in
1324 # line "8-bit Shoup's" method should deliver twice the performance of the
1325 # "4-bit" one. It should also be noted that in the SSE2 case the improvement
1326 # can be "super-linear," i.e. more than twofold, mostly because >>8 maps to
1327 # a single instruction on an SSE2 register. This is unlike the "4-bit" case,
1328 # where >>4 maps to the same number of instructions in both MMX and SSE2.
1329 # The bottom line is that a switch to SSE2 is considered justifiable
1330 # only if we choose to implement the "8-bit" method...