Q1: Are the scalar functions in SLEEF faster than the corresponding functions in the standard C library?
A1: No. Today's standard C libraries are very well optimized, and there is little room for further optimization. The reason why SLEEF is fast is that it computes directly with SIMD registers and ALUs. This is not as simple as it sounds, because conditional branches have to be eliminated in order to take full advantage of SIMD computation. If an algorithm requires conditional branches that depend on the argument, it must be prepared for input vectors that contain both elements for which the branch would be taken and elements for which it would not. Letting each element in a vector follow a different code path would spoil the advantage of SIMD computation.
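To illustrate the point, here is a minimal sketch written for this FAQ with SSE2 intrinsics (it is not code from SLEEF itself) showing the usual branch-elimination technique: both code paths are computed for every element, and the results are selected per element with a comparison mask.

#include <stdio.h>
#include <emmintrin.h>

// Scalar version: the code path depends on the argument.
static double scalar_f(double x) {
  if (x < 1.0) return x * 2.0;  // path A
  return x + 10.0;              // path B
}

// Branch-free SIMD version: both paths are computed for every element,
// and the results are selected with a comparison mask.
static __m128d vector_f(__m128d x) {
  __m128d pathA = _mm_mul_pd(x, _mm_set1_pd(2.0));
  __m128d pathB = _mm_add_pd(x, _mm_set1_pd(10.0));
  __m128d mask  = _mm_cmplt_pd(x, _mm_set1_pd(1.0));   // all-ones where x < 1
  return _mm_or_pd(_mm_and_pd(mask, pathA), _mm_andnot_pd(mask, pathB));
}

int main() {
  __m128d v = vector_f(_mm_set_pd(3.0, 0.5));  // the two elements take different paths
  double r[2];
  _mm_storeu_pd(r, v);
  printf("%g %g\n", r[0], r[1]);                   // prints: 1 13
  printf("%g %g\n", scalar_f(0.5), scalar_f(3.0)); // prints: 1 13
}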
Q2: Do the trigonometric functions (e.g. sin) in SLEEF return correct values for the whole range of inputs?
A2: Yes. SLEEF implements a vectorized version of the Payne-Hanek range reduction, and all the trigonometric functions return correct values with the specified accuracy over the whole range of inputs.
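As a quick check, the following small example (added here for illustration; it uses the scalar Sleef_sin_u10 function and assumes linking with -lsleef) evaluates sin at 1e22, which is far beyond the range where a reduction using a double-precision approximation of pi stays accurate.

#include <stdio.h>
#include "sleef.h"   // link with -lsleef

int main() {
  // 1e22 is far too large for a reduction that uses a double-precision
  // approximation of pi; full Payne-Hanek reduction is required here.
  double x = 1e22;
  printf("sin(%g) = %.15g\n", x, Sleef_sin_u10(x));
  // The correctly rounded value of sin(1e22) is approximately -0.852200849767189.
}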
Q3: What can I do to make SLEEF run faster?
A3: The most important thing is to choose the fastest vector extension available on your machine. SLEEF is optimized for computers with FMA instructions, and it runs slowly on CPUs that do not have them, such as Ivy Bridge or older CPUs and Atom. If you are not sure, use the dispatcher; the dispatcher in SLEEF is not slow. If you want to speed up computation further, try using LTO. With LTO, the compiler fuses the code within the library into the code calling the library functions, and this sometimes results in a considerable performance boost. In this case, you should not use the dispatcher, and you should build SLEEF and the program against which SLEEF is linked with the same compiler and the same compiler version.
Recent x86_64 gcc can auto-vectorize calls to scalar functions. To utilize this functionality, OpenMP SIMD pragmas can be added to the declarations of scalar functions like Sleef_sin_u10 by defining the SLEEF_ENABLE_OMP_SIMD macro before including sleef.h on x86_64 computers. With these pragmas, gcc can use its auto-vectorizer to vectorize calls to these scalar functions. For example, the following code can be vectorized by gcc-10.
#include <stdio.h>

#define SLEEF_ENABLE_OMP_SIMD
#include "sleef.h"

#define N 65536
#define M (N + 3)

static double func(double x) { return Sleef_pow_u10(x, -x); }

double int_simpson(double a, double b) {
  double h = (b - a) / M;
  double sum_odd = 0.0, sum_even = 0.0;
  for(int i = 1;i <= M-3;i += 2) {
    sum_odd  += func(a + h * i);
    sum_even += func(a + h * (i + 1));
  }
  return h / 3 * (func(a) + 4 * sum_odd + 2 * sum_even + 4 * func(b - h) + func(b));
}

int main() {
  double sum = 0;
  for(int i=1;i<N;i++) sum += Sleef_pow_u10(i, -i);
  printf("%g %g\n", int_simpson(0, 1), sum);
}
$ gcc-10 -fopenmp -ffast-math -mavx2 -O3 sophomore.c -lsleef -S -o- | grep _ZGV
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
        call    _ZGVdN4vv_Sleef_pow_u10@PLT
$
Link time optimization (LTO) is a functionality implemented in gcc, clang and other compilers for optimizing across translation units (or source files). It can sometimes dramatically improve the performance of the code, because it is capable of fusing library functions into the code calling those functions. The build system of SLEEF supports LTO, so the library can be built with LTO support by just specifying the -DSLEEF_ENABLE_LTO=TRUE cmake option. However, there are a few things to note in order to get the optimal performance.
1. You should not use the dispatcher with LTO. Dispatchers prevent the functions from being fused with LTO.
2. You have to use the same compiler with the same version to build the library and your code.
3. You cannot build shared libraries with LTO.
Although LTO is a smart technique for improving the performance of library functions, it can be hard to use in real situations. One of the reasons is that people still need to use old compilers to build their projects. SLEEF can instead generate header files in which the library functions are all defined as inline functions, and these headers can be used with old compilers. In theory, inline functions should give similar performance to LTO, but in practice the inline functions tend to perform better. To generate those header files, specify the -DSLEEF_BUILD_INLINE_HEADERS=TRUE cmake option. Below is an example code utilizing the generated header files for SSE2 and AVX2. You cannot use a dispatcher with these header files.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <x86intrin.h>

#include <sleefinline_sse2.h>
#include <sleefinline_avx2128.h>

int main(int argc, char **argv) {
  __m128d va = _mm_set_pd(2, 10);
  __m128d vb = _mm_set_pd(3, 20);

  __m128d vc = Sleef_powd2_u10sse2(va, vb);

  double c[2];
  _mm_storeu_pd(c, vc);

  printf("%g, %g\n", c[0], c[1]);

  __m128d vd = Sleef_powd2_u10avx2128(va, vb);

  double d[2];
  _mm_storeu_pd(d, vd);

  printf("%g, %g\n", d[0], d[1]);
}
$ gcc-10 -ffp-contract=off -O3 -march=native helloinline.c -I./include
$ ./a.out
1e+20, 8
1e+20, 8
$ nm -g a.out
00000000000036a0 R Sleef_rempitabdp
0000000000003020 R Sleef_rempitabsp
0000000000003000 R _IO_stdin_used
                 w _ITM_deregisterTMCloneTable
                 w _ITM_registerTMCloneTable
000000000000d010 D __TMC_END__
000000000000d010 B __bss_start
                 w __cxa_finalize@@GLIBC_2.2.5
000000000000d000 D __data_start
000000000000d008 D __dso_handle
                 w __gmon_start__
00000000000020a0 T __libc_csu_fini
0000000000002030 T __libc_csu_init
                 U __libc_start_main@@GLIBC_2.2.5
                 U __printf_chk@@GLIBC_2.3.4
000000000000d010 D _edata
000000000000d018 B _end
00000000000020a8 T _fini
0000000000001f40 T _start
000000000000d000 W data_start
0000000000001060 T main
$
Since Emscripten supports SSE2 intrinsics, the SSE2 inlinable function header can be used for WebAssembly.
#include <stdio.h>
#include <emmintrin.h>
#include "sleefinline_sse2.h"

int main(int argc, char **argv) {
  double a[] = {2, 10};
  double b[] = {3, 20};

  __m128d va, vb, vc;

  va = _mm_loadu_pd(a);
  vb = _mm_loadu_pd(b);

  vc = Sleef_powd2_u10sse2(va, vb);

  double c[2];
  _mm_storeu_pd(c, vc);

  printf("pow(%g, %g) = %g\n", a[0], b[0], c[0]);
  printf("pow(%g, %g) = %g\n", a[1], b[1], c[1]);
}
$ emcc -O3 -msimd128 -msse2 hellowasm.c
$ ../node-v15.7.0-linux-x64/bin/node --experimental-wasm-simd ./a.out.js
pow(2, 3) = 8
pow(10, 20) = 1e+20
$
For each architecture, SLEEF implements versions of its functions using each of the available vector extensions. A dispatcher is a function that dynamically selects the fastest implementation for the computer on which it runs. The dispatchers in SLEEF are designed to have very low overhead.
Fig. 7.1 shows simplified code for our dispatcher. There is only one exported function, mainFunc. When mainFunc is called for the first time, dispatcherMain is called internally, since funcPtr is initialized to the pointer to dispatcherMain. dispatcherMain detects whether the CPU supports SSE 4.1 and rewrites funcPtr to point to the function that utilizes SSE 4.1 or SSE 2, depending on the result of the CPU feature detection. When mainFunc is called for the second time, dispatcherMain is not executed again; mainFunc simply executes the function pointed to by funcPtr, which was set during the execution of dispatcherMain.
There are several advantages to our dispatcher. The first advantage is that it does not require any compiler-specific extension. The second advantage is simplicity: the dispatcher consists of fewer than 20 lines of simple code, and since the dispatchers are completely separated for each function, there is not much room for bugs to creep in.
The third advantage is low overhead. You might think that the overhead is one extra function call, including execution of the prologue and the epilogue. However, modern compilers are smart enough to eliminate the redundant prologue, epilogue and return instruction. The actual overhead is just one jmp instruction, which is cheap because it is unconditional, and this overhead is likely hidden by out-of-order execution.
The fourth advantage is thread safety. Only one variable, funcPtr, is shared among threads, and there are only two possible values for this pointer: the pointer to dispatcherMain, and the pointer to either funcSSE2 or funcSSE4, depending on the availability of the extension. Once funcPtr is set to the pointer to funcSSE2 or funcSSE4, it never changes again, so it should be easy to confirm that the code works correctly in all cases.
static double (*funcPtr)(double arg);

static double dispatcherMain(double arg) {
  double (*p)(double arg) = funcSSE2;
#if the compiler supports SSE4.1
  if (SSE4.1 is available on the CPU) p = funcSSE4;
#endif
  funcPtr = p;
  return (*funcPtr)(arg);
}

static double (*funcPtr)(double arg) = dispatcherMain;

double mainFunc(double arg) {
  return (*funcPtr)(arg);
}
Fig. 7.1: Simplified code of our dispatcher
ULP stands for "unit in the last place", which is sometimes used for expressing the accuracy of a calculation. 1 ULP is the distance between the two closest floating point numbers around a given value, and thus it depends on the exponent of the FP number. The accuracy of calculation by reputable math libraries is usually between 0.5 and 1 ULP, where accuracy means the largest error of the calculation. The SLEEF math library provides multiple accuracy choices for most of the math functions. Many functions come in 3.5-ULP and 1-ULP versions, and the 3.5-ULP versions are faster than the 1-ULP versions. If you care more about execution speed than accuracy, it is advised to use the 3.5-ULP versions along with -ffast-math or "unsafe math optimization" options for the compiler.
Note that 3.5 ULPs of error is small enough for many applications. If you do not manage the error of computation by carefully ordering the floating point operations in your code, you can easily accumulate that much error in the computation results.
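The following small sketch (an illustration added here, using only standard C) shows that the absolute size of 1 ULP grows with the magnitude of the value, so an error bound expressed in ULPs scales with the result rather than being an absolute bound.

#include <stdio.h>
#include <math.h>

int main() {
  // 1 ULP is the spacing between adjacent representable numbers,
  // so its absolute size depends on the exponent of the value.
  double x1 = 1.0, x2 = 1048576.0;  // 2^0 and 2^20
  printf("1 ULP at %g is %g\n", x1, nextafter(x1, INFINITY) - x1);
  printf("1 ULP at %g is %g\n", x2, nextafter(x2, INFINITY) - x2);
}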
In the IEEE 754 standard, underflow does not happen abruptly when the exponent becomes zero. Instead, when a number to be represented is smaller than a certain value, a denormal (subnormal) number with reduced precision is produced. This is sometimes called gradual underflow. Some processor implementations use a flush-to-zero mode instead, because it is easier to implement in hardware. In flush-to-zero mode, numbers smaller in magnitude than the smallest normalized number are replaced with zero. FP operations are not IEEE 754 conformant if a flush-to-zero mode is used, and flush-to-zero can affect the accuracy of calculation in some cases. The smallest positive normalized number can be referred to with DBL_MIN for double precision and FLT_MIN for single precision. The naming of these macros is a little bit confusing, because DBL_MIN is not the smallest positive double precision number.
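The following small sketch (added here for illustration, using only standard C) prints DBL_MIN and the smallest positive denormal number, showing that representable numbers exist below DBL_MIN.

#include <stdio.h>
#include <float.h>
#include <math.h>

int main() {
  // DBL_MIN is the smallest positive *normalized* double; denormal
  // numbers below it are still representable, with reduced precision.
  printf("DBL_MIN           = %g\n", DBL_MIN);
  printf("smallest denormal = %g\n", nextafter(0.0, 1.0));
  // In flush-to-zero mode, this result would be replaced with zero.
  printf("DBL_MIN / 4       = %g\n", DBL_MIN / 4);
}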
You can see the known maximum errors of the math functions in glibc in the glibc manual.
In order to evaluate a trigonometric function with a large argument, an argument reduction method is used to find the FP remainder of dividing the argument x by π. We devised a variation of the Payne-Hanek argument reduction method which is suitable for vector computation. Fig. 7.2 shows explanatory source code for this method; see our paper for the details.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpfr.h>

typedef struct { double x, y; } double2;

double2 dd(double d) { double2 r = { d, 0 }; return r; }
int64_t d2i(double d) { union { double f; int64_t i; } tmp = {.f = d }; return tmp.i; }
double i2d(int64_t i) { union { double f; int64_t i; } tmp = {.i = i }; return tmp.f; }
double upper(double d) { return i2d(d2i(d) & 0xfffffffff8000000LL); }
double clearlsb(double d) { return i2d(d2i(d) & 0xfffffffffffffffeLL); }

double2 ddrenormalize(double2 t) {
  double2 s = dd(t.x + t.y);
  s.y = t.x - s.x + t.y;
  return s;
}

double2 ddadd(double2 x, double2 y) {
  double2 r = dd(x.x + y.x);
  double v = r.x - x.x;
  r.y = (x.x - (r.x - v)) + (y.x - v) + (x.y + y.y);
  return r;
}

double2 ddmul(double x, double y) {
  double2 r = dd(x * y);
  r.y = fma(x, y, -r.x);
  return r;
}

double2 ddmul2(double2 x, double2 y) {
  double2 r = ddmul(x.x, y.x);
  r.y += x.x * y.y + x.y * y.x;
  return r;
}

// This function computes remainder(a, PI/2)
double2 modifiedPayneHanek(double a) {
  double table[4];
  int scale = fabs(a) > 1e+200 ? -128 : 0;
  a = ldexp(a, scale);

  // Table generation
  mpfr_set_default_prec(2048);
  mpfr_t pi, m;
  mpfr_inits(pi, m, NULL);
  mpfr_const_pi(pi, GMP_RNDN);
  mpfr_d_div(m, 2, pi, GMP_RNDN);
  mpfr_set_exp(m, mpfr_get_exp(m) + (ilogb(a) - 53 - scale));
  mpfr_frac(m, m, GMP_RNDN);
  mpfr_set_exp(m, mpfr_get_exp(m) - (ilogb(a) - 53));

  for(int i=0;i<4;i++) {
    table[i] = clearlsb(mpfr_get_d(m, GMP_RNDN));
    mpfr_sub_d(m, m, table[i], GMP_RNDN);
  }

  mpfr_clears(pi, m, NULL);

  // Main computation
  double2 x = dd(0);
  for(int i=0;i<4;i++) {
    x = ddadd(x, ddmul(a, table[i]));
    x.x = x.x - round(x.x);
    x = ddrenormalize(x);
  }

  double2 pio2 = { 3.141592653589793*0.5, 1.2246467991473532e-16*0.5 };
  x = ddmul2(x, pio2);

  return fabs(a) < 0.785398163397448279 ? dd(a) : x;
}
Fig. 7.2: Explanatory source code for our modified Payne-Hanek reduction method
It is a soup ladle. "Sleef" means a soup ladle in Dutch.