Environment Setup

  1. CUDA Installation (skip if already installed; version 12.4 or higher is required): see the LLaMA Factory installation guide.
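
You can check whether a suitable toolkit is already present from the command line (nvcc reports the installed toolkit version; nvidia-smi reports the driver and the highest CUDA version it supports):

nvcc --version
nvidia-smi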

  2. It is recommended to use uv, a very fast Python environment manager, to install the dependencies. After installing uv, create a new Python environment and install the dependencies with the following commands (note: this does not include the dependencies for the audio cloning functionality):
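
If uv is not yet installed, one option is the official standalone installer (see the uv documentation for alternatives, such as pip install uv):

curl -LsSf https://astral.sh/uv/install.sh | sh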

git clone https://github.com/xming521/WeClone.git
cd WeClone
uv venv .venv --python=3.10
source .venv/bin/activate # on Windows, run .venv\Scripts\activate
uv pip install --group main -e . 

Tip

If you want to fine-tune with the latest models, you need to manually install the latest version of LLaMA Factory. Other dependencies, such as vllm, pytorch, and transformers, may also need to be updated:

uv pip install --upgrade git+https://github.com/hiyouga/LLaMA-Factory.git
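
The related packages mentioned above can be upgraded the same way (note that PyTorch is published on PyPI as torch; pin versions if you need to keep them mutually compatible):

uv pip install --upgrade vllm torch transformers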
  3. Copy the configuration file template and rename it to settings.jsonc. All subsequent configuration changes should be made in this file.

    cp settings.template.jsonc settings.jsonc

Note

Training and inference-related configurations are all managed in the settings.jsonc file.
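
For orientation: a .jsonc file is ordinary JSON that allows comments. The snippet below is purely illustrative and its keys are hypothetical; the actual structure should be copied from settings.template.jsonc:

{
  // Hypothetical keys for illustration only; use the real ones from settings.template.jsonc
  "common_args": {
    "model_name_or_path": "./models/base-model", // base model to fine-tune
    "template": "qwen" // chat template; must match the chosen model
  }
}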

  4. Use the following command to test whether the CUDA environment is properly configured and recognized by PyTorch (not required on Mac):

python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
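
For more detail, a similar one-liner can report the PyTorch build, the CUDA version it was compiled against, and the detected GPU, using standard torch APIs:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"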
  5. (Optional) Install FlashAttention to accelerate training and inference:

uv pip install flash-attn --no-build-isolation
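
A quick import confirms that the build succeeded (recent flash_attn releases expose a __version__ attribute; if yours does not, a plain import is still a valid check):

python -c "import flash_attn; print(flash_attn.__version__)"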