Diffstat (limited to 'llama.cpp-vulkan/README')
-rw-r--r--  llama.cpp-vulkan/README  22
1 file changed, 0 insertions, 22 deletions
diff --git a/llama.cpp-vulkan/README b/llama.cpp-vulkan/README
deleted file mode 100644
index 5509d44..0000000
--- a/llama.cpp-vulkan/README
+++ /dev/null
@@ -1,22 +0,0 @@
-llama.cpp
-
-LLM inference in C/C++
-
-The main goal of llama.cpp is to enable LLM inference with minimal
-setup and state-of-the-art performance on a wide range of hardware
-locally and in the cloud.
-
- - Plain C/C++ implementation without any dependencies
- - Apple silicon is a first-class citizen - optimized via ARM NEON,
- Accelerate and Metal frameworks
- - AVX, AVX2, AVX512 and AMX support for x86 architectures
- - RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V
- architectures
- - 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer
- quantization for faster inference and reduced memory use
- - Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for
- AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- - Vulkan and SYCL backend support
- - CPU+GPU hybrid inference to partially accelerate models larger than
- the total VRAM capacity
-
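
Below the diff, two hedged sketches of features the deleted README lists. First, the CPU+GPU hybrid-inference bullet: in the llama.h C API this corresponds to the n_gpu_layers field of llama_model_params, which sets how many layers are offloaded to the GPU backend (Vulkan, for this package) while the rest run on the CPU. This is a minimal sketch assuming a circa-2024 llama.h; the API has shifted between releases (e.g. llama_load_model_from_file vs. llama_model_load_from_file, and llama_backend_init once took a bool argument), so the version this package pins may differ.

    /* Hedged sketch: partial GPU offload via n_gpu_layers.
     * Assumes a circa-2024 llama.h; names vary between releases. */
    #include <stdio.h>
    #include "llama.h"

    int main(void) {
        llama_backend_init();  /* older releases: llama_backend_init(false) */

        struct llama_model_params mparams = llama_model_default_params();
        /* Offload 20 layers to the Vulkan device; the remainder stay in
         * system RAM, which is how models larger than VRAM can still run. */
        mparams.n_gpu_layers = 20;

        struct llama_model * model =
            llama_load_model_from_file("model.gguf", mparams);
        if (model == NULL) {
            fprintf(stderr, "failed to load model\n");
            return 1;
        }

        /* ... build a context with llama_new_context_with_params(),
         *     then tokenize and llama_decode() as usual ... */

        llama_free_model(model);
        llama_backend_fini();
        return 0;
    }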
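
Second, the integer-quantization bullet: the same header exposes llama_model_quantize (the llama-quantize tool is a thin wrapper around it). Another sketch under the same version caveat; LLAMA_FTYPE_MOSTLY_Q4_K_M is one of the llama_ftype values covering the listed bit widths.

    /* Hedged sketch: quantize an F16 GGUF to 4-bit Q4_K_M.
     * Same caveat: field and enum names may differ per release. */
    #include <stdio.h>
    #include "llama.h"

    int main(void) {
        llama_model_quantize_params qparams = llama_model_quantize_default_params();
        qparams.ftype   = LLAMA_FTYPE_MOSTLY_Q4_K_M;  /* 4-bit K-quant */
        qparams.nthread = 4;

        /* Reads the F16 model and writes a roughly 4x smaller file. */
        if (llama_model_quantize("model-f16.gguf", "model-q4_k_m.gguf", &qparams) != 0) {
            fprintf(stderr, "quantization failed\n");
            return 1;
        }
        return 0;
    }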