Diffstat (limited to 'SlackBuilds/llama.cpp-vulkan/README')
 SlackBuilds/llama.cpp-vulkan/README | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+), 0 deletions(-)
diff --git a/SlackBuilds/llama.cpp-vulkan/README b/SlackBuilds/llama.cpp-vulkan/README
new file mode 100644
index 0000000..5509d44
--- /dev/null
+++ b/SlackBuilds/llama.cpp-vulkan/README
@@ -0,0 +1,32 @@
+llama.cpp
+
+LLM inference in C/C++
+
+The main goal of llama.cpp is to enable LLM inference with minimal
+setup and state-of-the-art performance on a wide range of hardware,
+both locally and in the cloud.
+
+ - Plain C/C++ implementation without any dependencies
+ - Apple silicon is a first-class citizen - optimized via ARM NEON,
+ Accelerate and Metal frameworks
+ - AVX, AVX2, AVX512 and AMX support for x86 architectures
+ - RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V
+ architectures
+ - 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer
+ quantization for faster inference and reduced memory use
+ - Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for
+ AMD GPUs via HIP and Moore Threads GPUs via MUSA)
+ - Vulkan and SYCL backend support
+ - CPU+GPU hybrid inference to partially accelerate models larger than
+   the total VRAM capacity (see the examples below)
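+
+For example, to run a model with part of it offloaded to the GPU via
+the Vulkan backend (model path and layer count are illustrative; -ngl
+sets how many layers are placed in VRAM, the rest stay on the CPU):
+
+  llama-cli -m /path/to/model.gguf -ngl 20 -p "Hello"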
+
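+To produce a 4-bit quantized model with the bundled llama-quantize
+tool (file names here are illustrative):
+
+  llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M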