Set up and configure an AI assistant (or chat) cog for your server with one of OpenAI's ChatGPT language models.
Features include configurable prompt injection, dynamic embeddings, custom function calling, and more!
- [p]assistant: base command for setting up the assistant
- [p]chat: talk with the assistant
- [p]convostats: view a user's token usage/conversation message count for the channel
- [p]clearconvo: reset your conversation with the assistant in the channel
Generate an image with DALL·E 3
Usage: /draw <prompt> [size] [quality] [style]
prompt:
(Required) What would you like to draw?
size:
(Optional) The size of the image to generate
quality:
(Optional) The quality of the image to generate
style:
(Optional) The style of the image to generate
Checks: Server Only
Get help using assistant
.chathelp
Chat with Autto!
Conversations are per-user, per-channel, meaning a conversation you have in one channel is kept in memory separately from a conversation in another channel
Optional Arguments
--outputfile <filename>
- uploads a file with the reply instead (no spaces)
--extract
- extracts code blocks from the reply
--last
- resends the last message of the conversation
Examples
.chat write a python script that prints "Hello World!" --outputfile hello.py
- will output a file containing the whole response
.chat write a python script that prints "Hello World!" --outputfile hello.py --extract
- will output a file containing just the code blocks and send the rest as text
.chat write a python script that prints "Hello World!" --extract
- will send the code separately from the reply
.chat <question>
ask, escribir, razgovor, discuter, plaudern, 채팅, charlar, baterpapo, and sohbet
1 per 6.0 seconds
server_only
Check the token and message count of yourself or another user's conversation for this channel
Conversations are per-user, per-channel, meaning a conversation you have in one channel is kept in memory separately from a conversation in another channel
Conversations are only stored in memory until the bot restarts or the cog reloads
.convostats [user]
server_only
Reset your conversation with the bot
This will clear all message history between you and the bot for this channel
.convoclear
clearconvo
server_only
Pop the last message from your conversation
.convopop
bot_has_server_permissions and server_only
Copy the conversation to another channel, thread, or forum
.convocopy <channel>
bot_has_server_permissions and server_only
Set a system prompt for this conversation!
This allows customization of assistant behavior on a per channel basis!
Check out This Guide for prompting help.
.convoprompt [prompt]
server_only
View the current transcript of a conversation
This is mainly here for moderation purposes
.convoshow [user=None] [channel=None]
GUILD_OWNER
showconvo
server_only
Fetch related embeddings according to the current topn setting along with their scores
You can use this to fine-tune the minimum relatedness for your assistant
.query <query>
Setup the assistant
You will need an api key from OpenAI to use ChatGPT and their other models.
.assistant
ADMIN
assist
server_only
Switch vision resolution between high and low for relevant GPT-4-Turbo models
.assistant resolution
Set the embedding inclusion amount
Top N is the amount of embeddings to include with the initial prompt
.assistant topn <top_n>
Import embeddings from an .xlsx file
Args:
overwrite (bool): overwrite embeddings with existing entry names
.assistant importexcel <overwrite>
Set the presence penalty for the model (-2.0 to 2.0)
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
.assistant presence <presence_penalty>
Toggle the assistant on or off
.assistant toggle
Set the conversation expiration time
Regardless of this number, the initial prompt and internal system message are always included,
this only applies to any conversation between the user and bot after that.
Set to 0 to store conversations indefinitely or until the bot restarts or cog is reloaded
.assistant maxtime <retention_seconds>
Set the frequency penalty for the model (-2.0 to 2.0)
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
.assistant frequency <frequency_penalty>
Enables use of the search_internet function
Get your API key Here
.assistant braveapikey
BOT_OWNER
brave
Toggle whether GPT can call functions
.assistant functioncalls
usefunctions
Set the OpenAI model to use
NOTE: Specifying a model without its version identifier (like gpt-3.5-turbo instead of gpt-3.5-turbo-0125) will sometimes lose the ability to call functions in parallel; this is an OpenAI issue.
.assistant model [model=None]
Restore the cog from a backup
.assistant restorecog
BOT_OWNER
Set the initial prompt for GPT to use
Check out This Guide for prompting help.
Placeholders
.assistant prompt [prompt]
pre
Set the temperature for the model (0.0 - 2.0)
Closer to 0 is more concise and accurate while closer to 2 is more imaginative
.assistant temperature <temperature>
Set the max response tokens the model can respond with
Set to 0 for response tokens to be dynamic
.assistant maxresponsetokens <max_tokens>
Import embeddings to use with the assistant
Args:
overwrite (bool): overwrite embeddings with existing entry names
.assistant importjson <overwrite>
Set a channel specific system prompt
.assistant channelprompt [channel=None] [system_prompt]
Toggle the draw command on or off
.assistant toggledraw
drawtoggle
Set the OpenAI Embedding model to use
.assistant embedmodel [model=None]
Toggle whether failed regex blocks the assistant's reply
Some regexes can cause catastrophic backtracking
The bot can safely handle this if it happens and will either continue on or block the response.
.assistant regexfailblock
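A minimal sketch of catastrophic backtracking, using an illustrative nested-quantifier pattern (not one from the cog):

```python
import re

# Nested quantifiers like (a+)+ backtrack exponentially on
# near-miss inputs: each extra "a" roughly doubles the work.
pattern = re.compile(r"(a+)+$")

assert pattern.match("a" * 10)                 # full match: fast
assert pattern.match("a" * 10 + "b") is None   # near miss: ~2^10 attempts
# With "a" * 30 + "b" the same call would effectively hang,
# which is why the cog may block a reply when a regex fails.
```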
Toggle collaborative conversations
Multiple people speaking in a channel will be treated as a single conversation.
.assistant collab
Export embeddings to an .xlsx file
Note: csv exports do not include the embedding values
.assistant exportexcel
Export embeddings to a .csv file
Note: csv exports do not include the embedding values
.assistant exportcsv
Toggle persistent conversations
.assistant persist
BOT_OWNER
Reset the token usage stats for this server
.assistant resetusage
Refresh embedding weights
This command can be used when changing the embedding model
Embeddings that were created using OpenAI cannot be used with the self-hosted model and vice versa
.assistant refreshembeds
refreshembeddings, syncembeds, and syncembeddings
Add/Remove items from the tutor list.
If using OpenAI's function calling and talking to a tutor, the AI is able to create its own embeddings to remember later
role_or_member
can be a member or role
.assistant tutor <role_or_member>
tutors
Set the timezone used for prompt placeholders
.assistant timezone <timezone>
Set the maximum function calls allowed in a row
This sets how many times the model can call functions in a row
Only the following models can call functions at the moment
.assistant maxrecursion <recursion>
Wipe saved embeddings for all servers
This will delete any and all saved embedding training data for the assistant.
.assistant resetglobalembeddings <yes_or_no>
BOT_OWNER
Wipe all settings and data for entire cog
.assistant wipecog <confirm>
BOT_OWNER
Wipe saved conversations for the assistant in all servers
This will delete any and all saved conversations for the assistant.
.assistant resetglobalconversations <yes_or_no>
BOT_OWNER
Set the max messages for a conversation
Conversation retention is cached and gets reset when the bot restarts or the cog reloads.
Regardless of this number, the initial prompt and internal system message are always included,
this only applies to any conversation between the user and bot after that.
Set to 0 to disable conversation retention
Note: actual message count may exceed the max retention during an API call
.assistant maxretention <max_retention>
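Retention can be pictured as keeping the initial/system prompt while trimming the exchange to the last N messages; a hypothetical sketch, not the cog's actual code:

```python
def trim_conversation(messages, max_retention):
    # The system prompt is always kept; only the user/assistant
    # exchange after it is trimmed to the last max_retention messages.
    if max_retention <= 0:
        return messages
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_retention:]
```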
Wipe saved embeddings for the assistant
This will delete any and all saved embedding training data for the assistant.
.assistant resetembeddings <yes_or_no>
Toggle question mode
If question mode is on, embeddings will only be sourced during the first message of a conversation and messages that end in ?
.assistant questionmode
Toggle allowing per-conversation system prompt overriding
.assistant sysoverride
Toggle whether questions need to end with ?
.assistant questionmark
Wipe saved conversations for the assistant in this server
This will delete any and all saved conversations for the assistant.
.assistant resetconversations <yes_or_no>
Import embeddings to use with the assistant
Args:
overwrite (bool): overwrite embeddings with existing entry names
This will read excel files too
.assistant importcsv <overwrite>
Toggle whether the bot responds to mentions in any channel
.assistant mentionrespond
Set the minimum relatedness an embedding must be to include with the prompt
Relatedness threshold between 0 and 1 to include in embeddings during chat
Questions are converted to embeddings and compared against stored embeddings to pull the most relevant, this is the score that is derived from that comparison
Hint: The closer to 1 you get, the more deterministic and accurate the results may be, just don't be too strict or there won't be any results.
.assistant relatedness <minimum_relatedness>
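Relatedness scores of this kind are typically cosine similarities between the question's embedding and each stored embedding; a minimal sketch with hypothetical helpers, not the cog's internals:

```python
import math

def cosine_similarity(a, b):
    # Score near 1 means highly related; near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_related(question_vec, stored, minimum_relatedness, top_n):
    # Keep only embeddings at or above the threshold, best first,
    # capped at the top_n setting.
    scored = [
        (name, cosine_similarity(question_vec, vec))
        for name, vec in stored.items()
    ]
    scored = [s for s in scored if s[1] >= minimum_relatedness]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_n]
```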
Set the max tokens that the bot will send to the model
Tips
Using more than the model can handle will raise exceptions.
.assistant maxtokens <max_tokens>
Toggle whether the assistant listens to other bots
NOT RECOMMENDED FOR PUBLIC BOTS!
.assistant listentobots
BOT_OWNER
botlisten and ignorebots
Set the channel for the assistant
.assistant channel [channel=None]
View current settings
To send in the current channel, use .assistant view false
.assistant view [private=False]
v
Set your OpenAI key
.assistant openaikey
key
Add/Remove items from the blacklist
channel_role_member
can be a member, role, channel, or category channel
.assistant blacklist <channel_role_member>
Set the system prompt for GPT to use
Check out This Guide for prompting help.
Placeholders
.assistant system [system_prompt]
sys
Show the channel specific system prompt
.assistant channelpromptshow [channel=None]
Remove certain words/phrases in the bot's responses
.assistant regexblacklist <regex>
Export embeddings to a json file
.assistant exportjson
Override settings for specific roles
NOTE
If a user has two roles with override settings, the override associated with the higher role will be used.
.assistant override
Assign a max retention time override to a role
Specify same role and time to remove the override
.assistant override maxtime <retention_seconds> <role>
Assign a max token override to a role
Specify same role and token count to remove the override
.assistant override maxtokens <max_tokens> <role>
Assign a role to use a model
Specify same role and model to remove the override
.assistant override model <model> <role>
Assign a max response token override to a role
Set to 0 for response tokens to be dynamic
Specify same role and token count to remove the override
.assistant override maxresponsetokens <max_tokens> <role>
Assign a max message retention override to a role
Specify same role and retention amount to remove the override
.assistant override maxretention <max_retention> <role>
Cycle between embedding methods
Dynamic embeddings mean that the embeddings pulled are dynamically appended to the initial prompt for each individual question.
Each time the user asks a question, the previous embeddings are replaced with embeddings pulled from the current question; this reduces token usage significantly
Static embeddings are applied in front of each user message and get stored with the conversation instead of being replaced with each question.
Hybrid embeddings are a combination, with the first embedding being stored in the conversation and the rest being dynamic, this saves a bit on token usage.
User embeddings are injected into the beginning of the prompt as the first user message.
Dynamic embeddings are helpful for Q&A, but not so much for chat when you need to retain the context pulled from the embeddings. The hybrid method is a good middle ground
.assistant embedmethod
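The difference between the methods can be sketched as two ways of assembling the message list (hypothetical helpers, not the cog's code):

```python
def build_dynamic(initial_prompt, embeddings, history, question):
    # Dynamic: related snippets are appended to the initial prompt
    # and replaced on every question, so they never accumulate.
    system = initial_prompt + "\n\nContext:\n" + "\n".join(embeddings)
    return (
        [{"role": "system", "content": system}]
        + history
        + [{"role": "user", "content": question}]
    )

def build_static(initial_prompt, embeddings, history, question):
    # Static: snippets are prepended to the user message and stored
    # with the conversation, so token usage grows over time.
    content = "\n".join(embeddings) + "\n\n" + question
    return (
        [{"role": "system", "content": initial_prompt}]
        + history
        + [{"role": "user", "content": content}]
    )
```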
Take a backup of the cog
.assistant backupcog
BOT_OWNER
View the token usage stats for this server
.assistant usage
Make the model more deterministic by setting a seed for the model.
If specified, the system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
.assistant seed [seed=None]
Toggle whether to ping the user on replies
.assistant mention
Set the minimum character length for questions
Set to 0 to respond to anything
.assistant minlength <min_question_length>
Manage embeddings for training
Embeddings are used to optimize training of the assistant and minimize token usage.
By using this the bot can store vast amounts of contextual information without going over the token limit.
Note
You can enter a search query with this command to bring up the menu and go directly to that embedding selection.
.embeddings [query]
/embeddings [query]
ADMIN
emenu
server_only
Add custom function calls for Assistant to use
READ
The following objects are passed by default as keyword arguments.
Functions must include *args, **kwargs in the params and return a string.
# Can be either sync or async
async def func(*args, **kwargs) -> str:
Only the bot owner can manage this; server owners can see descriptions but not code
.customfunctions [function_name=None]
/customfunctions [function_name=None]
customfunction and customfunc
server_only