🤖 Deploy to Chatbot
Usage Steps:
Deploy AstrBot
Set up a messaging platform within AstrBot
Start the API service by running:
weclone-cli server
Add a new service provider in AstrBot:
Type: OpenAI
API Base URL: Enter the correct endpoint depending on your AstrBot deployment method (e.g., for Docker: http://172.17.0.1:8005/v1)
Model: gpt-3.5-turbo
API Key: Can be any value (placeholder)
Tool use is not supported after fine-tuning, so make sure to disable tools by sending the command:
/tool off all
in the messaging platform, or the fine-tuned responses may not work as expected.
Set the system prompt in AstrBot according to the default_system used during fine-tuning.
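Before wiring the service into AstrBot, you can sanity-check the endpoint by hand. The sketch below builds an OpenAI-compatible chat request against the WeClone api_service; the host, port, and system prompt text are assumptions, so adjust them to match your deployment (e.g., the Docker address above):

```python
import json
import urllib.request

def build_chat_request(base_url, prompt, system_prompt):
    """Build an OpenAI-compatible /chat/completions request."""
    payload = {
        # The model name is a label only; WeClone routes the request
        # to the fine-tuned model behind the api_service.
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The key is a placeholder; the service does not validate it.
            "Authorization": "Bearer placeholder",
        },
    )

req = build_chat_request(
    "http://127.0.0.1:8005/v1",      # assumed local address; use 172.17.0.1 from Docker
    "hello",
    "your default_system text here",  # must match the prompt used for fine-tuning
)
# Once weclone-cli server is running, send it with:
# urllib.request.urlopen(req)
```

If the response reads like your fine-tuned persona, the endpoint is ready to register in AstrBot.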

Important
Check the api_service logs to ensure that:
The request parameters sent to the large language model service are consistent with those used during fine-tuning
All tool plugin functionalities are disabled, to avoid interference with the fine-tuned model's behavior.
Adjust sampling parameters such as temperature, top_p, top_k, etc., to customize your model's behavior. These parameters control the response style and randomness of the model during inference.
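For illustration, these sampling parameters appear as top-level fields in an OpenAI-compatible request body. The values below are arbitrary starting points, not recommendations:

```python
# Sketch: sampling parameters in an OpenAI-compatible chat request body.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hi"}],
    "temperature": 0.7,  # higher values -> more random, varied replies
    "top_p": 0.9,        # nucleus sampling: keep tokens within this cumulative probability
    "top_k": 50,         # not part of the official OpenAI schema; some backends accept it
}
```

Lowering temperature and top_p makes replies more deterministic, which can help if the fine-tuned model drifts off-persona.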
LangBot
Deployment Steps:
Deploy LangBot
Add a bot within the LangBot platform
On the model page, add a new model:
Name: gpt-3.5-turbo
Provider: Select OpenAI
Request URL: Use the API endpoint provided by WeClone
For detailed connection instructions, refer to the documentation
API Key: Can be any placeholder value (not validated)
In the pipeline configuration, select the model you just added, or modify the prompt configuration as needed.
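A common stumbling block when filling in the request URL is whether to include the /v1 suffix. A small hypothetical helper shows the normalization you want either way:

```python
def normalize_base_url(url: str) -> str:
    """Ensure an OpenAI-compatible base URL ends with /v1 (hypothetical helper)."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url

# Both forms resolve to the same endpoint base:
normalize_base_url("http://127.0.0.1:8005")     # -> "http://127.0.0.1:8005/v1"
normalize_base_url("http://127.0.0.1:8005/v1")  # -> "http://127.0.0.1:8005/v1"
```

If requests from LangBot return 404 errors, a missing or doubled /v1 in the request URL is the first thing to check.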