Changes
diff --git a/model-switcher/README.md b/model-switcher/README.md
new file mode 100644
index 0000000..ff28474
--- /dev/null
+++ b/model-switcher/README.md
@@ -0,0 +1,98 @@
+# LLM Model Switcher
+
+A simple web-based tool to switch between different `llama.cpp` model configurations on a headless AI server.
+
+## How it works
+
+The tool manages `llama.cpp` configurations by manipulating symbolic links and interacting with `systemd`:
+
+1. **Configurations**: It scans `/etc/llama.cpp.d` for `.conf` files. Each file should contain the environment variables for a specific model (see the example after this list).
+2. **Active Link**: The systemd service for `llama.cpp` is expected to read its parameters from `/etc/llama.cpp.conf`.
+3. **Switching**: When you select a model in the web UI:
+ * The tool updates the symlink at `/etc/llama.cpp.conf` to point to the selected file in `/etc/llama.cpp.d`.
+ * It restarts the `llama.cpp` systemd service to apply the changes (a sketch of this operation follows the list).
+
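+A model file might look like the following. This is a hypothetical example: the variable names (`LLAMA_MODEL`, `LLAMA_ARGS`) are assumptions and depend entirely on how your service unit consumes the file:
+
+```ini
+# /etc/llama.cpp.d/example-model.conf (hypothetical)
+LLAMA_MODEL=/srv/models/example-7b.gguf
+LLAMA_ARGS=--ctx-size 8192 --n-gpu-layers 99
+```
+
+The switch itself boils down to replacing the symlink and restarting the service. Here is a minimal sketch of that operation; the names are illustrative, not the actual identifiers used in `main.py`:
+
+```python
+import os
+import subprocess
+
+CONF_DIR = "/etc/llama.cpp.d"
+ACTIVE_LINK = "/etc/llama.cpp.conf"
+SERVICE = "llama.cpp"
+
+def switch_model(name: str) -> None:
+    target = os.path.join(CONF_DIR, f"{name}.conf")
+    if not os.path.isfile(target):
+        raise FileNotFoundError(target)
+    # Swap the symlink atomically: create a temporary link, then
+    # rename it over the old one so the path is never dangling.
+    tmp_link = ACTIVE_LINK + ".tmp"
+    if os.path.lexists(tmp_link):
+        os.unlink(tmp_link)
+    os.symlink(target, tmp_link)
+    os.replace(tmp_link, ACTIVE_LINK)
+    # Restart the service so it re-reads the environment file.
+    subprocess.run(["systemctl", "restart", SERVICE], check=True)
+```
+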
+## Requirements
+
+* Python 3.x (no external dependencies required).
+* `sudo` privileges (to modify `/etc` and restart system services).
+* A `systemd` service named `llama.cpp` (an illustrative unit sketch follows this list).
+
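+This tool does not install or configure the service itself; it only assumes one exists that reads `/etc/llama.cpp.conf`. A compatible unit might look like this sketch (the binary path and variable names are assumptions, not part of this project):
+
+```ini
+# /etc/systemd/system/llama.cpp.service (illustrative sketch)
+[Unit]
+Description=llama.cpp server
+After=network.target
+
+[Service]
+EnvironmentFile=/etc/llama.cpp.conf
+ExecStart=/usr/local/bin/llama-server -m ${LLAMA_MODEL} $LLAMA_ARGS
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
+```
+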
+## Usage
+
+Run the script with `sudo`:
+
+```bash
+sudo python3 main.py
+```
+
+By default, the server will be available at `http://127.0.0.1:7330`.
+
+### Command Line Options
+
+* `--host`: Set the listening address (e.g., `0.0.0.0` for all interfaces). If a path is provided (e.g., `/tmp/model.sock`), it will listen on a Unix socket. (Default: `127.0.0.1`)
+* `--port`: Set the TCP port. (Default: `7330`)
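+
+For example, to listen on all interfaces, or on a Unix socket instead of TCP:
+
+```bash
+sudo python3 main.py --host 0.0.0.0 --port 7330
+sudo python3 main.py --host /tmp/model.sock
+```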
+
+## Project Structure
+
+* `main.py`: The Python backend using `http.server`.
+* `static/`: Contains the frontend assets (HTML/CSS/JS).
+ * Uses **Preact** and **HTM** via ESM (no build step required).
+ * Plain CSS for styling.