Modern CPUs support a form of data-level parallelism in which arithmetic operations on short, fixed-size vectors are performed on all components at once. This is known as single instruction, multiple data (SIMD).
SIMD support in the processor takes the form of instruction sets that operate on vector registers. By operating on multiple scalar values at the same time, code that works with points, colors, and other vector data can be sped up.
In Factor, SIMD support is exposed in the form of special-purpose implementations of the Sequence protocol. These are fixed-length, homogeneous sequences. They are referred to as vectors, but should not be confused with Factor's vector type, which can hold any type of object and can be resized.
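For example, the math.vectors.simd vocabulary defines types such as float-4 (four single-precision floats) and int-4 (four 32-bit integers), each with its own literal syntax. A minimal sketch:

```factor
USING: math.vectors.simd prettyprint ;

! A float-4 literal: a fixed-length sequence of four
! single-precision floats, stored contiguously.
float-4{ 1.0 2.0 3.0 4.0 } .
```

Like any other sequence, such a value can be indexed and iterated with the standard sequence words; only its length and element type are fixed.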
The words in the math.vectors vocabulary, which can be used with any sequence of numbers, are special-cased by the compiler. If the compiler can prove that only SIMD vectors are used, it expands the vector operations into Low-level SIMD primitives. While in the general case SIMD intrinsics operate on heap-allocated SIMD vectors, even that can be optimized: in many cases the compiler can unbox SIMD vectors, storing them directly in registers.
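As a sketch of what this looks like in practice, the same math.vectors words used on ordinary sequences apply unchanged to SIMD types; here v+ performs a componentwise addition that the compiler can turn into a single vector instruction:

```factor
USING: math.vectors math.vectors.simd prettyprint ;

! Componentwise addition of two float-4 values.
float-4{ 1.0 2.0 3.0 4.0 }
float-4{ 5.0 6.0 7.0 8.0 }
v+ .
! Prints: float-4{ 6.0 8.0 10.0 12.0 }
```

The same v+ call on two ordinary arrays of numbers would produce the same result, just without the SIMD acceleration.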
Since the only difference between ordinary code and SIMD-accelerated code is that the latter uses special fixed-length SIMD sequences, the SIMD library is very easy to use. To ensure your code compiles to use vector instructions without boxing and unboxing overhead, follow the guidelines for Writing efficient SIMD code.
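One of those guidelines is giving the compiler enough type information to prove that only SIMD vectors flow through a word. A hedged sketch using the typed vocabulary (norm-squared is a hypothetical word name, not part of the library):

```factor
USING: kernel math.vectors math.vectors.simd sequences typed ;

! Declaring the input as float-4 lets the compiler unbox the
! vector and compile v* down to SIMD machine instructions.
TYPED: norm-squared ( v: float-4 -- n )
    dup v* sum ;
```

Without the type declaration the word still works, but the compiler may fall back to generic, heap-allocating sequence code.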
There should never be any reason to use Low-level SIMD primitives directly, but they too have a straightforward, if lower-level, interface.