# Environment Setup

1. **CUDA Installation** (skip if already installed; CUDA 12.4 or higher is required): see the *LLaMA Factory* documentation for instructions.<br>
2. **It is recommended to use `uv` to install dependencies** — a very fast Python environment manager. After installing `uv`, create a new Python environment and install the dependencies with the following commands (note: these do **not** include the dependencies for the audio-cloning functionality):

```
git clone https://github.com/xming521/WeClone.git
cd WeClone
uv venv .venv --python=3.10
source .venv/bin/activate # On Windows, run: .venv\Scripts\activate
uv pip install --group main -e .
```

> Tip<br>
>
> If you want to fine-tune the **latest models**, you need to manually install the latest version of **LLaMA Factory**; **other dependencies may also need to be updated**, such as `vllm`, `pytorch`, and `transformers`:
>
> ```
> uv pip install --upgrade git+https://github.com/hiyouga/LLaMA-Factory.git
> ```

3. **Copy the configuration file template and rename it to `settings.jsonc`**. All subsequent configuration changes should be made in this file.<br>

   ```
   cp settings.template.jsonc settings.jsonc
   ```

> Note\
> Training and inference-related configurations are all managed in the `settings.jsonc` file.
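Since `settings.jsonc` is JSON with comments, the standard `json.loads` will reject it directly. If you want to inspect the configuration programmatically, a minimal sketch is shown below — it assumes only full-line `//` comments, and the keys in the sample are hypothetical placeholders, not WeClone's actual schema:

```python
import json
import re

def load_jsonc(text: str) -> dict:
    # Strip full-line //-style comments, then parse the rest as ordinary JSON.
    stripped = re.sub(r"^\s*//.*", "", text, flags=re.MULTILINE)
    return json.loads(stripped)

# Hypothetical keys for illustration only — not WeClone's real settings.
sample = """{
  // training section
  "model_name": "demo",
  "train": { "epochs": 3 }
}"""

config = load_jsonc(sample)
print(config["train"]["epochs"])  # → 3
```

A JSONC-aware loader (or stripping comments before committing the file) avoids surprises if other tooling needs to read the same configuration.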

4. **Use the following command to test whether the CUDA environment is properly configured and recognized by PyTorch** (not required on macOS):

```
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
```
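For a slightly more detailed check than the one-liner above, the sketch below also reports the PyTorch and CUDA versions and the detected GPUs, and degrades gracefully when PyTorch or CUDA is missing:

```python
def cuda_report() -> str:
    """Summarize the PyTorch/CUDA environment as a human-readable string."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if not torch.cuda.is_available():
        return "CUDA not available (CPU-only PyTorch build or missing driver)"
    lines = [f"PyTorch {torch.__version__}, CUDA {torch.version.cuda}"]
    # List every GPU that PyTorch can see.
    for i in range(torch.cuda.device_count()):
        lines.append(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    return "\n".join(lines)

print(cuda_report())
```

If the report says CUDA is unavailable despite a GPU being present, the usual culprits are a CPU-only PyTorch wheel or a driver older than the CUDA version PyTorch was built against.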

5. *(Optional)* **Install FlashAttention** to accelerate training and inference:

```
uv pip install flash-attn --no-build-isolation
```
