Type: LLM Worker
An LLM worker takes user input and sends it to a large language model, such as GPT-3, to generate a response. An LLM worker can be created through the CLI.
The LLM worker can be configured to interact with OpenAI's GPT-3 models. To configure an LLM worker that uses OpenAI, you must have an OpenAI account. Once logged in, navigate to the API keys page of your account to generate an API key.
Keep note of the API key, as it will be needed later when configuring a ServisBOT LLM worker. Once you have your OpenAI API key, you can begin creating an LLM Worker using the configuration below:
OpenAIApiKeySecretSrn: The SRN of the secret containing the OpenAI API key that will be used to communicate with OpenAI. See Creating an OpenAI Secret for more information on creating the secret.
ModelName (Optional): The OpenAI model to use when interacting with OpenAI. The available models are as follows:
text-davinci-003
text-curie-001
text-babbage-001
text-ada-001
If not provided, the text-davinci-003 model is used.
Temperature (Optional): A number between 0 and 1 which is used to control how "creative" the model is when generating responses. The closer this number is to 0, the more predictable the responses are; the closer it is to 1, the more creative the responses may become. If not provided, 0 is used.
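As a sketch of the defaulting and validation behavior described above, the helper below models how the two optional settings fall back to text-davinci-003 and 0. The function and its checks are illustrative only, not part of ServisBOT's or OpenAI's API:

```python
# Illustrative sketch of the documented defaults; names here are invented.
VALID_MODELS = {
    "text-davinci-003",
    "text-curie-001",
    "text-babbage-001",
    "text-ada-001",
}

def build_llm_configuration(secret_srn, model_name=None, temperature=None):
    """Apply the documented defaults: text-davinci-003 and a temperature of 0."""
    model = model_name or "text-davinci-003"
    if model not in VALID_MODELS:
        raise ValueError(f"Unknown model: {model}")
    temp = 0 if temperature is None else temperature
    if not 0 <= temp <= 1:
        raise ValueError("Temperature must be between 0 and 1")
    return {
        "OpenAIApiKeySecretSrn": secret_srn,
        "ModelName": model,
        "Temperature": temp,
    }
```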
You can now create a secret using your OpenAI API key, and an LLM worker using the CLI.
Save the JSON below to a file and create a worker using the CLI command sb-cli worker create file.json
{
  "Data": {
    "Configuration": {
      "OpenAIApiKeySecretSrn": "srn:vault::acme:secret:openai",
      "Temperature": 0.9,
      "ModelName": "text-davinci-003"
    },
    "Type": "OpenAI"
  },
  "Organization": "acme",
  "Config": {
    "Avatar": "default-bot"
  },
  "Enabled": true,
  "Description": "An LLM Worker",
  "Type": "llm-worker",
  "Status": "published"
}
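If you prefer to generate the file programmatically rather than saving it by hand, the same definition can be written out from Python before running the CLI command. This is only a convenience sketch; the worker definition itself is unchanged:

```python
import json

# The same worker definition as above, expressed as a Python dict.
worker = {
    "Data": {
        "Configuration": {
            "OpenAIApiKeySecretSrn": "srn:vault::acme:secret:openai",
            "Temperature": 0.9,
            "ModelName": "text-davinci-003",
        },
        "Type": "OpenAI",
    },
    "Organization": "acme",
    "Config": {"Avatar": "default-bot"},
    "Enabled": True,
    "Description": "An LLM Worker",
    "Type": "llm-worker",
    "Status": "published",
}

# Write the definition for use with: sb-cli worker create file.json
with open("file.json", "w") as f:
    json.dump(worker, f, indent=2)
```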
Once you have created the worker using the CLI, an ID will be returned. You then need to update or create a bot and place the worker at the top of its workers list, using the returned ID and the type llm-worker.
Due to the nature of GPT-3 models, and the fact that they can generally respond to most user inputs, the llm-worker performs a bot mission done once it handles a message. This prevents an LLM-based bot from becoming too greedy and not releasing control to other bots which may be required to make up the full user experience.