LLM Worker (Large Language Model)

Type: LLM Worker

An LLM worker takes user input and sends it to a large language model, such as GPT-3, to generate a response. An LLM worker can be created through the CLI.

Modes

The LLM worker can operate in two modes: text and intent. The mode is set via the worker's Data.Mode field (see the abridged snippet after the list below).

  • text mode simply passes the user's utterance to the LLM and returns the model's response.
  • intent mode uses the user's utterance and the bot's configured intents to prompt the model to classify the utterance into one of those intents. The bot then performs the actions associated with the classified intent.
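
For reference, the mode is selected via the Mode field in the worker's Data block. The sketch below is abridged from the full worker definitions shown later in this guide; only the mode-related fields are included, and the full definitions also set the Configuration, Organization, and other fields shown in those examples.

{
  "Data": {
    "Mode": "text",
    "Type": "OpenAI"
  }
}

Set Mode to intent instead to enable intent classification.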

OpenAI

The LLM worker can be configured to interact with OpenAI’s GPT-3 model. To configure an LLM worker which uses OpenAI you must have an OpenAI account. Once logged in, you can navigate here to generate an API key.

Once you have created an API key to interact with OpenAI, make a note of it, as it will be needed later when configuring a ServisBOT LLM worker.

Importing an OpenAI Secret

  • You must create a secret using the OpenAI API key mentioned in the previous section.
  • To do this with the CLI, do the following:
    • Run sb-cli secret create-interactive.
    • Enter the name of your secret.
    • Choose secret.
    • Paste your OpenAI API key when prompted for a secret value.
  • To do this using the Portal, do the following:
    • Navigate to the Secrets Management section.
    • Click Create Secret.
    • Enter the alias for your secret.
    • Select Token Auth as your Secret Type Template.
    • Paste your API key into the field.
    • Click Save.
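
The secret is then referenced by its SRN via the OpenAIApiKeySecretSrn field of the worker configuration. The abridged snippet below is taken from the worker examples later in this guide and assumes an organization named acme and a secret alias of openai; substitute your own organization and secret alias.

"Configuration": {
  "OpenAIApiKeySecretSrn": "srn:vault::acme:secret:openai"
}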

OpenAI LLM Worker Configuration (Text Mode)

Once you have your OpenAI API key, you can begin creating an LLM Worker using the configuration below:

  • OpenAIApiKeySecretSrn: The SRN of the secret that contains the OpenAI API key used to communicate with OpenAI. See Importing an OpenAI Secret for more information on creating the secret.

  • ModelName (Optional): The OpenAI model you wish to use when interacting with OpenAI. The available models are as follows:

    • gpt-3.5-turbo-instruct

    If not provided, the gpt-3.5-turbo-instruct model is used. See here for more details about the available models.
  • Temperature (Optional): A number between 0 and 1 which is used to control how “creative” the model is when generating responses. The closer this number is to 0, the more predictable the responses are; the closer it is to 1, the more creative the responses may become. If not provided, 0 is used.

  • You can now create a secret using your OpenAI API key, and then create an LLM worker, using the CLI.

Creating an OpenAI LLM Worker (Text Mode)

Save the JSON below to a file and create a worker using the CLI command sb-cli worker create file.json.

{
  "Data": {
    "Configuration": {
      "OpenAIApiKeySecretSrn": "srn:vault::acme:secret:openai",
      "Temperature": 0.9,
      "ModelName": "gpt-3.5-turbo-instruct"
    },
    "Mode": "text",
    "Type": "OpenAI"
  },
  "Organization": "acme",
  "Config": {
    "Avatar": "default-bot"
  },
  "Enabled": true,
  "Description": "An LLM Worker",
  "Type": "llm-worker",
  "Status": "published"
}

Once you have created the worker using the CLI, it will return an ID. You then need to create or update a bot and place the worker at the top, using the returned ID and the type llm-worker.

LLM Worker Text Mode Behavior

Due to the nature of GPT-3 models, and the fact that they can generally respond to most user inputs, the llm-worker performs a bot mission done once it handles a message. This prevents an LLM-based bot from becoming too greedy and not releasing control to other bots which may be required to make up the full user experience.

OpenAI LLM Worker Configuration (Intent Mode)

When using the LLM worker in intent mode, it is important that intent names and descriptions are clear, concise, and do not overlap with other intents. This helps the model understand each intent and provide a more accurate classification. For example, intents named CheckOrderStatus (“the user wants to know the status of an existing order”) and CancelOrder (“the user wants to cancel an existing order”) are easier to distinguish than two intents whose descriptions both refer to orders generically.

Once you have your OpenAI API key, you can begin creating an LLM Worker using the configuration below:

  • OpenAIApiKeySecretSrn: The SRN of the secret that contains the OpenAI API key used to communicate with OpenAI. See Importing an OpenAI Secret for more information on creating the secret.

  • ModelName (Optional): The OpenAI model you wish to use when interacting with OpenAI. The available models are as follows:

    • gpt-3.5-turbo

    If not provided, the gpt-3.5-turbo model is used. See here for more details about the available models.
  • Provider (Optional): The provider of the LLM; currently openai and azure are supported. Defaults to openai.

  • SendDescriptions (Optional): If true, the worker will send the intent descriptions to the model to help it understand each intent. Defaults to true.

  • UtteranceContextLength (Optional): The number of turns in the conversation to include as context for the LLM. Defaults to 5, but fewer will be used if there have not been enough turns.

  • You can now create a secret using your OpenAI API key, and then create an LLM worker, using the CLI.

Creating an OpenAI LLM Worker (Intent Mode)

Save the JSON below to a file and create a worker using the CLI command sb-cli worker create file.json.

{
  "Data": {
    "Configuration": {
      "OpenAIApiKeySecretSrn": "srn:vault::acme:secret:openai",
      "UtteranceContextLength": 5,
      "SendDescriptions": true,
      "ModelName": "gpt-3.5-turbo"
    },
    "Type": "OpenAI",
    "Mode": "intent"
  },
  "Organization": "acme",
  "Config": {
    "Avatar": "default-bot"
  },
  "Enabled": true,
  "Description": "An LLM Worker",
  "Type": "llm-worker",
  "Name": "LLMWorker"
}

Once you have created the worker using the CLI, it will return an ID. You then need to create or update a bot and place the worker at the top, using the returned ID and the type llm-worker.