fix chatgpt parameter settings and descriptions

This commit is contained in:
ivan deng
2023-04-27 20:06:27 +08:00
parent a7e19354a6
commit adbf5edcf5
3 changed files with 16 additions and 16 deletions

View File

@@ -10,7 +10,7 @@
 {
     'name': 'Latest ChatGPT4 AI Center. GPT 4 for image, Dall-E Image.Multi Robot Support. Chat and Training',
-    'version': '16.23.04.22',
+    'version': '16.23.04.27',
     'author': 'Sunpop.cn',
     'company': 'Sunpop.cn',
     'maintainer': 'Sunpop.cn',

View File

@@ -24,7 +24,7 @@ msgid ""
 " "
 msgstr ""
 "\n"
-"Avoid common words. The chatbot tries to avoid frequently used words in its replies to improve their diversity and novelty. Controls how strongly words that appear too often in the chatbot's replies are penalized."
+"-2~2 Avoid common words. The chatbot tries to avoid frequently used words in its replies to improve their diversity and novelty. Controls how strongly words that appear too often in the chatbot's replies are penalized."
 #. module: app_chatgpt
 #: model:ir.model.fields,help:app_chatgpt.field_ai_robot__presence_penalty
@@ -35,7 +35,7 @@ msgid ""
 " "
 msgstr ""
 "\n"
-"Avoid repeated words. The larger the value, the more the model tends to generate new topics, penalizing text that has already appeared."
+"-2~2 Avoid repeated words. The larger the value, the more the model tends to generate new topics, penalizing text that has already appeared."
 #. module: app_chatgpt
 #: model:ir.model.fields,help:app_chatgpt.field_ai_robot__max_tokens
@@ -62,7 +62,7 @@ msgid ""
 " "
 msgstr ""
 "\n"
-"Controls the \"novelty\" of replies. The higher the value, the more uncertain and random the chatbot's replies; the lower the value, the more predictable and conventional they become."
+"0~2 Controls the \"novelty\" of replies. The higher the value, the more uncertain and random the chatbot's replies; the lower the value, the more predictable and conventional they become."
 #. module: app_chatgpt
 #: model:ir.model.fields,help:app_chatgpt.field_ai_robot__sys_content
@@ -87,7 +87,7 @@ msgid ""
 " Try adjusting temperature or Top P but not both\n"
 " "
 msgstr ""
-"Language coherence. Similar to the temperature parameter, it controls the selectable probabilities during recursive association, and it also controls the \"novelty\" of replies. The difference is that top_p controls the cumulative probability of the highest-probability candidates: the smaller the value, the more conservative the reply; the larger the value, the more novel."
+"0~1 Language coherence. Similar to the temperature parameter, it controls the selectable probabilities during recursive association, and it also controls the \"novelty\" of replies. The difference is that top_p controls the cumulative probability of the highest-probability candidates: the smaller the value, the more conservative the reply; the larger the value, the more novel."
 #. module: app_chatgpt

View File

@@ -39,10 +39,10 @@ GPT-3 A set of models that can understand and generate natural language
     openapi_api_key = fields.Char(string="API Key", help="Provide the API key here")
     # begin gpt parameters
     # 1. stop: the condition under which the chatbot stops generating its reply. It can be a piece of text or a list; once the generated reply contains it, generation stops.
-    # 2. temperature: controls the "novelty" of replies. The higher the value, the more uncertain and random the replies; the lower the value, the more predictable and conventional.
+    # 2. temperature (0-2): controls the "novelty" of replies. The higher the value, the more uncertain and random the replies; the lower the value, the more predictable and conventional.
-    # 3. top_p: language coherence, somewhat similar to temperature; it also controls the "novelty" of replies. The difference is that top_p controls the cumulative probability of the highest-probability candidates: the smaller the value, the more conservative the reply; the larger the value, the more novel.
+    # 3. top_p (0-1): language coherence, somewhat similar to temperature; it also controls the "novelty" of replies. The difference is that top_p controls the cumulative probability of the highest-probability candidates: the smaller the value, the more conservative the reply; the larger the value, the more novel.
-    # 4. frequency_penalty: controls how strongly words that appear too often in replies are penalized. The chatbot tries to avoid frequent words to improve the diversity and novelty of its replies.
+    # 4. frequency_penalty (-2~2): controls how strongly words that appear too often in replies are penalized. The chatbot tries to avoid frequent words to improve the diversity and novelty of its replies.
-    # 5. presence_penalty: the counterpart of frequency_penalty; controls the penalty applied to words that appear less frequently. The chatbot tries to use less frequent words to improve the diversity and novelty of its replies.
+    # 5. presence_penalty (-2~2): the counterpart of frequency_penalty; controls the penalty applied to words that appear less frequently. The chatbot tries to use less frequent words to improve the diversity and novelty of its replies.
     max_tokens = fields.Integer('Max response', default=600,
         help="""
 Set a limit on the number of tokens per model response.
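The ranges noted in the comments above (0~2, 0~1, -2~2) can be enforced before a request is sent. A minimal sketch under those documented ranges; the helper names are hypothetical and not part of the module:

```python
# Hypothetical helper: clamp GPT sampling parameters to the ranges
# documented in the module's comments.
def clamp(value, lo, hi):
    """Limit value to the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

def sanitize_gpt_params(params):
    """Return a copy of params with out-of-range sampling values clamped."""
    ranges = {
        "temperature": (0.0, 2.0),         # 0~2
        "top_p": (0.0, 1.0),               # 0~1
        "frequency_penalty": (-2.0, 2.0),  # -2~2
        "presence_penalty": (-2.0, 2.0),   # -2~2
    }
    out = dict(params)
    for key, (lo, hi) in ranges.items():
        if key in out:
            out[key] = clamp(out[key], lo, hi)
    return out
```

Clamping silently is one design choice; raising a `ValidationError` on out-of-range input would be equally idiomatic in an Odoo model.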
@@ -50,7 +50,7 @@ GPT-3 A set of models that can understand and generate natural language
 (including system message, examples, message history, and user query) and the model's response.
 One token is roughly 4 characters for typical English text.
 """)
-    temperature = fields.Float(string='Temperature', default=0.8,
+    temperature = fields.Float(string='Temperature', default=1,
         help="""
 Controls randomness. Lowering the temperature means that the model will produce
 more repetitive and deterministic responses.
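The "roughly 4 characters per token" rule of thumb from the help text above can be turned into a quick budget check. A hedged sketch, not part of the module; the context limit is an assumption that varies per model:

```python
def rough_token_estimate(text):
    """Rough token count for typical English text (about 4 characters per token)."""
    return max(1, len(text) // 4)

def fits_budget(prompt, max_response_tokens, context_limit=4097):
    """Check whether prompt plus response is likely to fit the context window.

    context_limit is an assumed placeholder; real limits vary per model.
    """
    return rough_token_estimate(prompt) + max_response_tokens <= context_limit
```

This is only a heuristic; an exact count requires the model's tokenizer.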
@@ -65,13 +65,13 @@ GPT-3 A set of models that can understand and generate natural language
 Try adjusting temperature or Top P but not both
 """)
     # avoid common words
-    frequency_penalty = fields.Float('Frequency penalty', default=0.5,
+    frequency_penalty = fields.Float('Frequency penalty', default=0.1,
         help="""
 Reduce the chance of repeating a token proportionally based on how often it has appeared in the text so far.
 This decreases the likelihood of repeating the exact same text in a response.
 """)
     # the larger the value, the more the model tends toward new topics, penalizing text that has already appeared
-    presence_penalty = fields.Float('Presence penalty', default=0.5,
+    presence_penalty = fields.Float('Presence penalty', default=0.1,
         help="""
 Reduce the chance of repeating any token that has appeared in the text at all so far.
 This increases the likelihood of introducing new topics in a response.
@@ -313,11 +313,11 @@ GPT-3 A set of models that can understand and generate natural language
         pdata = {
             "model": self.ai_model,
             "prompt": data,
-            "temperature": 0.8,
+            "temperature": 1,
             "max_tokens": max_tokens,
-            "top_p": 1,
+            "top_p": 0.6,
-            "frequency_penalty": 0.0,
+            "frequency_penalty": 0.1,
-            "presence_penalty": 0.6,
+            "presence_penalty": 0.1,
             "stop": stop
         }
         response = requests.post(o_url, data=json.dumps(pdata), headers=headers, timeout=R_TIMEOUT)
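For reference, the request body this commit settles on can be reproduced in isolation. A sketch using only the values visible in the diff; the model name in the usage line is a placeholder, not the module's configured value:

```python
import json

def build_payload(model, prompt, max_tokens, stop=None):
    """Assemble the completion request body with the defaults set by this commit."""
    return {
        "model": model,
        "prompt": prompt,
        "temperature": 1,          # raised from 0.8
        "max_tokens": max_tokens,
        "top_p": 0.6,              # lowered from 1
        "frequency_penalty": 0.1,  # raised from 0.0
        "presence_penalty": 0.1,   # lowered from 0.6
        "stop": stop,
    }

# Placeholder model name; the module substitutes the robot's configured ai_model.
body = json.dumps(build_payload("gpt-3.5-turbo-instruct", "Hello", 600))
```

Note that the commit moves both temperature and top_p away from neutral at once, which sits uneasily with the module's own help text, "Try adjusting temperature or Top P but not both."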