πŸ€– Deploy to Chatbot

Usage Steps:

  1. Deploy AstrBot

  2. Set up a messaging platform within AstrBot

  3. Start the API service by running:

weclone-cli server
  4. Add a new service provider in AstrBot:

  • Type: OpenAI

  • API Base URL: Enter the correct endpoint depending on your AstrBot deployment method (e.g., for Docker: http://172.17.0.1:8005/v1)

  • Model: gpt-3.5-turbo

  • API Key: Can be any value (placeholder)

  5. The fine-tuned model does not support tool calls, so disable all tools by sending the command:

/tool off all

in the messaging platform, or the fine-tuned responses may not work as expected.

  6. Set the system prompt in AstrBot according to the default_system used during fine-tuning.

Important

Check the api_service logs to ensure that:

  • The request parameters sent to the large language model service are consistent with those used during fine-tuning

  • All tool plugin functionality is disabled, so it does not interfere with the fine-tuned model's behavior

  7. Adjust sampling parameters such as temperature, top_p, and top_k to customize your model's behavior.

    These parameters control the style and randomness of the model's responses during inference.
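As a sketch of how the pieces above fit together, the request AstrBot sends to the WeClone service follows the standard OpenAI chat-completions schema; the helper below builds such a payload with the sampling parameters from step 7 (the function name and default values are illustrative, not part of WeClone, and whether top_k is honored depends on the backing inference engine):

```python
# Build an OpenAI-compatible /v1/chat/completions payload for the
# WeClone API service. The model name must match the one registered
# in AstrBot ("gpt-3.5-turbo"); the system prompt should match the
# default_system value used during fine-tuning.
def build_chat_request(system_prompt, user_msg,
                       temperature=0.7, top_p=0.9):
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
        "temperature": temperature,  # higher = more random responses
        "top_p": top_p,              # nucleus-sampling cutoff
    }

payload = build_chat_request("<your default_system prompt>", "hello")
```

Note that the payload contains no `tools` field, which is consistent with disabling tool plugins in step 5.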

LangBot

Deployment Steps:

  1. Deploy LangBot

  2. Add a bot within the LangBot platform

  3. On the model page, add a new model:

    • Name: gpt-3.5-turbo

    • Provider: Select OpenAI

    • Request URL: Use the API endpoint provided by WeClone

    • For detailed connection instructions, refer to the documentation

    • API Key: Can be any placeholder value (not validated)

  4. In the pipeline configuration, select the model you just added, and adjust the prompt configuration as needed.
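To sanity-check the request URL before wiring it into LangBot, a small helper can normalize the API base to the `.../v1` form and derive the chat-completions path. This is a sketch assuming the service follows the OpenAI path convention implied by selecting the OpenAI provider; the helper name is made up for illustration:

```python
# Normalize a WeClone API base URL (with or without a trailing "/v1")
# and derive the chat-completions endpoint from it.
def chat_completions_url(api_base: str) -> str:
    base = api_base.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return base + "/chat/completions"

print(chat_completions_url("http://172.17.0.1:8005"))
# -> http://172.17.0.1:8005/v1/chat/completions
```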
