From ebb26eac2948e02def3c7ac1ac23c4ecd345a5a7 Mon Sep 17 00:00:00 2001
From: "Danilo M."
Date: Fri, 3 Apr 2026 18:17:29 +0200
Subject: repo: flatten layout — move packages to root, extras to .extras/
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- Move all packages from SlackBuilds/ to repo root
- Move hooks/, docs/, nvchecker.toml to .extras/
- Update CLAUDE.md and README.md to reflect new structure

Co-Authored-By: Claude Sonnet 4.6
---
 SlackBuilds/llama.cpp-vulkan/slack-desc | 19 -------------------
 1 file changed, 19 deletions(-)
 delete mode 100644 SlackBuilds/llama.cpp-vulkan/slack-desc

diff --git a/SlackBuilds/llama.cpp-vulkan/slack-desc b/SlackBuilds/llama.cpp-vulkan/slack-desc
deleted file mode 100644
index 273e15e..0000000
--- a/SlackBuilds/llama.cpp-vulkan/slack-desc
+++ /dev/null
@@ -1,19 +0,0 @@
-# HOW TO EDIT THIS FILE:
-# The "handy ruler" below makes it easier to edit a package description.
-# Line up the first '|' above the ':' following the base package name, and
-# the '|' on the right side marks the last column you can put a character in.
-# You must make exactly 11 lines for the formatting to be correct. It's also
-# customary to leave one space after the ':' except on otherwise blank lines.
-
-                |-----handy-ruler------------------------------------------------------|
-llama.cpp-vulkan: llama.cpp-vulkan (LLM inference in C/C++)
-llama.cpp-vulkan:
-llama.cpp-vulkan: Port of Facebook's LLaMA model in C/C++ with Vulkan GPU optimizations
-llama.cpp-vulkan:
-llama.cpp-vulkan: The main goal of llama.cpp is to enable LLM inference with minimal
-llama.cpp-vulkan: setup and state-of-the-art performance on a wide range of hardware
-llama.cpp-vulkan: locally and in the cloud.
-llama.cpp-vulkan:
-llama.cpp-vulkan: Home: https://github.com/ggml-org/llama.cpp
-llama.cpp-vulkan:
-llama.cpp-vulkan:
--
cgit v1.2.3
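
A note on the deleted file's format (illustration only, not part of the patch
above): the slack-desc header documents the SlackBuilds convention of exactly
11 "pkgname:" description lines, none extending past the handy ruler's closing
'|'. The sh sketch below shows one way such a file could be checked against
those two rules; the script and its filename argument are hypothetical and not
anything this repo ships.

    #!/bin/sh
    # Hypothetical slack-desc lint sketch (assumes the conventions quoted
    # in the file header above).
    f=${1:-slack-desc}
    # The first "name:" line gives the package name.
    pkg=$(awk -F: '/^[A-Za-z0-9._+-]+:/ { print $1; exit }' "$f")
    # Exactly 11 description lines are required.
    n=$(awk -v p="$pkg" 'index($0, p ":") == 1' "$f" | wc -l)
    [ "$n" -eq 11 ] || echo "expected 11 description lines, got $n"
    # The ruler line ends at the last usable column, so its length is the
    # maximum allowed line length.
    max=$(awk '/handy-ruler/ { print length($0); exit }' "$f")
    awk -v m="$max" -v p="$pkg" 'index($0, p ":") == 1 && length($0) > m {
        printf "line %d exceeds ruler width (%d > %d)\n", NR, length($0), m
    }' "$f"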