Compare commits


27 Commits

Author SHA1 Message Date
Rafi
f67ed7621c Multiple improvements for conversation 2023-03-30 21:45:23 +08:00
Rafi
97649e4bee pass 2023-03-30 18:12:55 +08:00
Rafi
1082da050b rm @vite-pwa/nuxt 2023-03-30 10:37:55 +08:00
Wong Saang
d89d1e288d Merge pull request #97 from erritis/main
Add title for prompt
2023-03-30 09:31:37 +08:00
Sergey Shekhovtsov
cd89d11d0b Add docker compose file for development
Added docker compose file for development

For convenience when developing
2023-03-28 22:36:53 +03:00
Erritis
cf0053a060 Add title for prompt
Added title for prompt

The title will simplify the search in the list of prompts
2023-03-28 22:35:12 +03:00
Wong Saang
019da4399e Merge pull request #93 from erritis/russian
Add Russian language support
2023-03-28 12:49:37 +08:00
Wong Saang
044961bb01 Merge pull request #90 from erritis/main
Add prompt fields to the translation section
2023-03-28 12:34:33 +08:00
Erritis
2374c81edb Add Russian language support 2023-03-28 04:02:56 +03:00
Erritis
699760713e Add prompt fields to the translation section
Added prompt fields to the translation section

Not all interface elements had the ability to add translation
2023-03-28 03:03:23 +03:00
Rafi
d75413cc49 update readme 2023-03-27 22:26:47 +08:00
Rafi
8175f199d2 Support GPT-4 2023-03-27 22:17:19 +08:00
Wong Saang
f8c2f396c1 Merge pull request #70 from Paramon/main
#50 add platform to docker-compose
2023-03-25 21:03:50 +08:00
Andrii Paramonov
8217647df8 #50 add platform 2023-03-24 16:44:00 +02:00
Rafi
288c9eeeca Change the functionality of custom API key to be controlled by the admin panel to determine whether it is enabled or not. 2023-03-24 15:43:01 +08:00
Rafi
4d09ff7c8a Added env var NUXT_PUBLIC_CUSTOM_API_KEY to docs 2023-03-24 14:32:38 +08:00
Rafi
5fa059017c Support controlling whether to enable the API Key setting module through environment variables. 2023-03-24 14:22:45 +08:00
Rafi
323f10844b Modify the placeholder of the default prompt for web search to solve the problem of not providing web search results to ChatGPT 2023-03-24 11:22:51 +08:00
Rafi
ee035390db add default prompt to web search 2023-03-23 18:48:08 +08:00
Wong Saang
be743bf799 Update README.md 2023-03-23 17:13:34 +08:00
Wong Saang
a59f84f2bf Update README.md 2023-03-23 17:12:59 +08:00
Rafi
ed0cf2997d update readme 2023-03-23 17:07:49 +08:00
Rafi
7f00c74097 Added wsgi-server environment variable EMAIL_FROM to docker-compose.yml and readme 2023-03-23 15:31:26 +08:00
Rafi
f007417fa4 add update info to readme 2023-03-23 12:53:08 +08:00
Rafi
27c5e2a3ac Get settings from backend, added web search functionality 2023-03-23 11:45:56 +08:00
Rafi
e90dc0c12b web_search toolbar 2023-03-22 23:29:58 +08:00
Rafi
837fd8c9ff update readme 2023-03-22 17:26:22 +08:00
21 changed files with 692 additions and 2343 deletions


@@ -1,14 +1,29 @@
-<p align="center">
-    <img alt="demo" src="./demos/demo.gif?v=1">
-</p>
+<div align="center">
+    <h1>ChatGPT UI</h1>
+</div>
 
 [English](./README.md) | [中文](./docs/zh/README.md)
 
-# ChatGPT UI
-
 A ChatGPT web client that supports multiple users, multiple database connections for persistent data storage, supports i18n. Provides Docker images and quick deployment scripts.
 
+https://user-images.githubusercontent.com/46235412/227156264-ca17ab17-999b-414f-ab06-3f75b5235bfe.mp4
+
 ## 📢Updates
 
+<details open>
+<summary><strong>2023-03-27</strong></summary>
+
+🚀 Support gpt-4 model. You can select the model in the "Model Parameters" of the front-end.
+The GPT-4 model requires whitelist access from OpenAI.
+</details>
+
+<details open>
+<summary><strong>2023-03-23</strong></summary>
+
+Added web search capability to generate more relevant and up-to-date answers from ChatGPT!
+This feature is off by default, you can turn it on in `Chat->Settings` in the admin panel, there is a record `open_web_search` in Settings, set its value to True.
+</details>
+
 <details open>
 <summary><strong>2023-03-15</strong></summary>
@@ -17,17 +32,7 @@ Add "open_registration" setting option in the admin panel to control whether use
 </details>
 
-<details open>
-<summary><strong>2023-03-10</strong></summary>
-
-Add 2 environment variables to control the typewriter effect:
-- `NUXT_PUBLIC_TYPEWRITER=true` to enable/disable the typewriter effect
-- `NUXT_PUBLIC_TYPEWRITER_DELAY=50` to set the delay time for each character in milliseconds.
-</details>
-
-<details open>
+<details>
 <summary><strong>2023-03-04</strong></summary>
 
 **Update to the latest official chat model** `gpt-3.5-turbo`
@@ -90,9 +95,6 @@ services:
     image: wongsaang/chatgpt-ui-client:latest
     environment:
       - SERVER_DOMAIN=http://backend-web-server
-      - NUXT_PUBLIC_APP_NAME='ChatGPT UI' # App name
-      - NUXT_PUBLIC_TYPEWRITER=true # Enable typewriter effect, default is false
-      - NUXT_PUBLIC_TYPEWRITER_DELAY=100 # Typewriter effect delay time, default is 50ms
     depends_on:
       - backend-web-server
     ports:
@@ -115,6 +117,7 @@ services:
 #      - EMAIL_HOST_USER=
 #      - EMAIL_HOST_PASSWORD=
 #      - EMAIL_USE_TLS=True
+#      - EMAIL_FROM=no-reply@example.com # Default sender email address
     ports:
       - '8000:8000'
     networks:
@@ -156,6 +159,16 @@ Before you can start chatting, you need to add an OpenAI API key. In the Setting
 Now you can access the web client at `http(s)://your.domain` or `http://123.123.123.123` to start chatting.
 
+## Donation
+
+> If it is helpful to you, it is also helping me.
+
+If you want to support me, Buy me a coffee ❤️ [https://www.buymeacoffee.com/WongSaang](https://www.buymeacoffee.com/WongSaang)
+
+<p align="center">
+    <img height="150" src="https://github.com/WongSaang/chatgpt-ui/blob/main/demos/bmc_qr.png?raw=true"/>
+</p>
+
 ## Development
 
 ### Setup

components/Conversation.vue (new file, 289 lines)

@@ -0,0 +1,289 @@
<script setup>
import {EventStreamContentType, fetchEventSource} from '@microsoft/fetch-event-source'
import {addConversation} from "../utils/helper";
const { $i18n, $auth } = useNuxtApp()
const runtimeConfig = useRuntimeConfig()
const currentModel = useCurrentModel()
const openaiApiKey = useApiKey()
const fetchingResponse = ref(false)
const messageQueue = []
let isProcessingQueue = false
const props = defineProps({
conversation: {
type: Object,
required: true
}
})
const processMessageQueue = () => {
if (isProcessingQueue || messageQueue.length === 0) {
return
}
if (!props.conversation.messages[props.conversation.messages.length - 1].is_bot) {
props.conversation.messages.push({id: null, is_bot: true, message: ''})
}
isProcessingQueue = true
const nextMessage = messageQueue.shift()
if (runtimeConfig.public.typewriter) {
let wordIndex = 0;
const intervalId = setInterval(() => {
props.conversation.messages[props.conversation.messages.length - 1].message += nextMessage[wordIndex]
wordIndex++
if (wordIndex === nextMessage.length) {
clearInterval(intervalId)
isProcessingQueue = false
processMessageQueue()
}
}, runtimeConfig.public.typewriterDelay)
} else {
props.conversation.messages[props.conversation.messages.length - 1].message += nextMessage
isProcessingQueue = false
processMessageQueue()
}
}
let ctrl
const abortFetch = () => {
if (ctrl) {
ctrl.abort()
}
fetchingResponse.value = false
}
const fetchReply = async (message) => {
ctrl = new AbortController()
let webSearchParams = {}
if (enableWebSearch.value) {
webSearchParams['web_search'] = {
ua: navigator.userAgent,
default_prompt: $i18n.t('webSearchDefaultPrompt')
}
}
const data = Object.assign({}, currentModel.value, {
openaiApiKey: enableCustomApiKey.value ? openaiApiKey.value : null,
message: message,
conversationId: props.conversation.id
}, webSearchParams)
try {
await fetchEventSource('/api/conversation/', {
signal: ctrl.signal,
method: 'POST',
headers: {
'accept': 'application/json',
'Content-Type': 'application/json',
},
body: JSON.stringify(data),
onopen(response) {
if (response.ok && response.headers.get('content-type') === EventStreamContentType) {
return;
}
throw new Error(`Failed to send message. HTTP ${response.status} - ${response.statusText}`);
},
onclose() {
if (ctrl.signal.aborted === true) {
return;
}
throw new Error(`Failed to send message. Server closed the connection unexpectedly.`);
},
onerror(err) {
throw err;
},
async onmessage(message) {
const event = message.event
const data = JSON.parse(message.data)
if (event === 'error') {
abortFetch()
showSnackbar(data.error)
return;
}
if (event === 'userMessageId') {
props.conversation.messages[props.conversation.messages.length - 1].id = data.userMessageId
return;
}
if (event === 'done') {
if (props.conversation.id === null) {
props.conversation.id = data.conversationId
genTitle(props.conversation.id)
}
props.conversation.messages[props.conversation.messages.length - 1].id = data.messageId
abortFetch()
return;
}
messageQueue.push(data.content)
processMessageQueue()
scrollChatWindow()
},
})
} catch (err) {
console.log(err)
abortFetch()
showSnackbar(err.message)
}
}
const grab = ref(null)
const scrollChatWindow = () => {
if (grab.value === null) {
return;
}
grab.value.scrollIntoView({behavior: 'smooth'})
}
const checkOrAddConversation = () => {
if (props.conversation.messages.length === 0) {
props.conversation.messages.push({id: null, is_bot: true, message: ''})
}
}
const send = (message) => {
fetchingResponse.value = true
if (props.conversation.messages.length === 0) {
addConversation(props.conversation)
}
props.conversation.messages.push({message: message})
fetchReply(message)
scrollChatWindow()
}
const stop = () => {
abortFetch()
}
const snackbar = ref(false)
const snackbarText = ref('')
const showSnackbar = (text) => {
snackbarText.value = text
snackbar.value = true
}
const editor = ref(null)
const usePrompt = (prompt) => {
editor.value.usePrompt(prompt)
}
const deleteMessage = (index) => {
props.conversation.messages.splice(index, 1)
}
const showWebSearchToggle = ref(false)
const enableWebSearch = ref(false)
const enableCustomApiKey = ref(false)
const settings = useSettings()
watchEffect(() => {
if (settings.value) {
const settingsValue = toRaw(settings.value)
showWebSearchToggle.value = settingsValue.open_web_search && settingsValue.open_web_search === 'True'
enableCustomApiKey.value = settingsValue.open_api_key_setting && settingsValue.open_api_key_setting === 'True'
}
})
</script>
<template>
<div
v-if="conversation.loadingMessages"
class="text-center"
>
<v-progress-circular
indeterminate
color="primary"
></v-progress-circular>
</div>
<div v-else>
<div
v-if="conversation.messages.length > 0"
ref="chatWindow"
>
<v-container>
<v-row>
<v-col
v-for="(message, index) in conversation.messages" :key="index"
cols="12"
>
<div
class="d-flex align-center"
:class="message.is_bot ? 'justify-start' : 'justify-end'"
>
<MessageActions
v-if="!message.is_bot"
:message="message"
:message-index="index"
:use-prompt="usePrompt"
:delete-message="deleteMessage"
/>
<MsgContent :message="message" />
<MessageActions
v-if="message.is_bot"
:message="message"
:message-index="index"
:use-prompt="usePrompt"
:delete-message="deleteMessage"
/>
</div>
</v-col>
</v-row>
</v-container>
<div ref="grab" class="w-100" style="height: 200px;"></div>
</div>
<Welcome v-if="conversation.id === null && conversation.messages.length === 0" />
</div>
<v-footer app>
<div class="px-md-16 w-100 d-flex flex-column">
<div class="d-flex align-center">
<v-btn
v-show="fetchingResponse"
icon="close"
title="stop"
class="mr-3"
@click="stop"
></v-btn>
<MsgEditor ref="editor" :send-message="send" :disabled="fetchingResponse" :loading="fetchingResponse" />
</div>
<v-toolbar
density="comfortable"
color="transparent"
>
<Prompt v-show="!fetchingResponse" :use-prompt="usePrompt" />
<v-switch
v-if="showWebSearchToggle"
v-model="enableWebSearch"
hide-details
color="primary"
:label="$t('webSearch')"
></v-switch>
<v-spacer></v-spacer>
</v-toolbar>
</div>
</v-footer>
<v-snackbar
v-model="snackbar"
multi-line
location="top"
>
{{ snackbarText }}
<template v-slot:actions>
<v-btn
color="red"
variant="text"
@click="snackbar = false"
>
Close
</v-btn>
</template>
</v-snackbar>
</template>
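The `processMessageQueue` logic in the new component above serializes streamed SSE chunks so they are appended to the last bot message one at a time, even while earlier chunks are still being rendered. Stripped of Vue reactivity and with the `setInterval`-based typewriter replaced by a synchronous per-character callback (both simplifications are assumptions for illustration, not the component's actual rendering path), the queue can be sketched standalone:

```javascript
// Standalone sketch of the chunk queue in Conversation.vue.
// Assumption: Vue refs and the timed typewriter are replaced by plain
// state and an optional synchronous per-character callback.
function createMessageQueue(onChar) {
  const queue = []
  let message = ''
  let processing = false

  const process = () => {
    if (processing || queue.length === 0) return
    processing = true
    const next = queue.shift()
    for (const ch of next) {
      message += ch            // append one character at a time
      if (onChar) onChar(ch)   // hook where the typewriter delay would go
    }
    processing = false
    process()                  // drain chunks queued in the meantime
  }

  return {
    push(chunk) { queue.push(chunk); process() },
    get message() { return message },
  }
}
```

The `isProcessingQueue` guard in the component matters because `onmessage` can fire while an earlier chunk is still being typed out; chunks pushed in the meantime are drained by the trailing `process()` call once the current chunk finishes.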


@@ -1,11 +1,15 @@
 <script setup>
 const dialog = ref(false)
 const currentModel = useCurrentModel()
 
 const availableModels = [
-    DEFAULT_MODEL.name
+    'gpt-3.5-turbo',
+    'gpt-4'
 ]
 
+const currentModelDefault = ref(MODELS[currentModel.value.name])
+
 watch(currentModel, (newVal, oldVal) => {
+    currentModelDefault.value = MODELS[newVal.name]
     saveCurrentModel(newVal)
 }, { deep: true })
 
@@ -83,7 +87,7 @@ watch(currentModel, (newVal, oldVal) => {
             single-line
             density="compact"
             type="number"
-            max="2048"
+            :max="currentModelDefault.total_tokens"
             step="1"
             style="width: 100px"
             class="flex-grow-0"
 
@@ -93,7 +97,7 @@ watch(currentModel, (newVal, oldVal) => {
     <v-col cols="12">
         <v-slider
             v-model="currentModel.max_tokens"
-            :max="2048"
+            :max="currentModelDefault.total_tokens"
             :step="1"
             hide-details
         >
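The diff above swaps the hard-coded `max="2048"` for a per-model limit looked up in a `MODELS` map. The map itself lives elsewhere in the repo; a plausible shape is sketched below, with token limits matching OpenAI's published context sizes at the time (the key names and token counts are assumptions, not taken from this diff):

```javascript
// Hypothetical shape of the MODELS lookup behind currentModelDefault.
// The real constants live in the repo's utils; values here are
// illustrative assumptions only.
const MODELS = {
  'gpt-3.5-turbo': { name: 'gpt-3.5-turbo', total_tokens: 4096 },
  'gpt-4':         { name: 'gpt-4',         total_tokens: 8192 },
}

// Mirrors the watch() handler: re-resolve defaults when the model changes.
function resolveModelDefault(currentModelName) {
  return MODELS[currentModelName]
}
```

This is why the slider's `:max` can differ per model: switching `currentModel` re-runs the watcher, which refreshes `currentModelDefault` before the template reads `total_tokens`.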


@@ -2,6 +2,7 @@
 const menu = ref(false)
 const prompts = ref([])
 const editingPrompt = ref(null)
+const newTitlePrompt = ref(null)
 const newPrompt = ref('')
 const submittingNewPrompt = ref(false)
 const promptInputErrorMessage = ref('')
 
@@ -24,11 +25,13 @@ const addPrompt = async () => {
     const { data, error } = await useAuthFetch('/api/chat/prompts/', {
         method: 'POST',
         body: JSON.stringify({
+            title: newTitlePrompt.value,
             prompt: newPrompt.value
         })
     })
     if (!error.value) {
         prompts.value.push(data.value)
+        newTitlePrompt.value = null
         newPrompt.value = ''
     }
     submittingNewPrompt.value = false
 
@@ -43,6 +46,7 @@ const updatePrompt = async (index) => {
     const { data, error } = await useAuthFetch(`/api/chat/prompts/${editingPrompt.value.id}/`, {
         method: 'PUT',
         body: JSON.stringify({
+            title: editingPrompt.value.title,
             prompt: editingPrompt.value.prompt
         })
     })
@@ -96,10 +100,12 @@ onMounted( () => {
     <template v-slot:activator="{ props }">
         <v-btn
             v-bind="props"
-            icon="speaker_notes"
-            title="Common prompts"
-            class="mr-3"
-        ></v-btn>
+            icon
+        >
+            <v-icon
+                icon="speaker_notes"
+            ></v-icon>
+        </v-btn>
     </template>
 
     <v-container>
 
@@ -108,7 +114,7 @@ onMounted( () => {
         max-width="500"
     >
         <v-card-title>
-            <span class="headline">Frequently prompts</span>
+            <span class="headline">{{ $t('frequentlyPrompts') }}</span>
         </v-card-title>
         <v-divider></v-divider>
@@ -125,35 +131,47 @@ onMounted( () => {
             >
                 <v-list-item
                     active-color="primary"
+                    rounded="xl"
                     v-if="editingPrompt && editingPrompt.id === prompt.id"
                 >
-                    <v-textarea
-                        rows="2"
-                        v-model="editingPrompt.prompt"
-                        :loading="editingPrompt.updating"
-                        variant="underlined"
-                        hide-details
-                        density="compact"
-                    >
-                        <template v-slot:append>
-                            <div class="d-flex flex-column">
-                                <v-btn
-                                    icon="done"
-                                    variant="text"
-                                    :loading="editingPrompt.updating"
-                                    @click="updatePrompt(idx)"
-                                >
-                                </v-btn>
-                                <v-btn
-                                    icon="close"
-                                    variant="text"
-                                    @click="cancelEditPrompt()"
-                                >
-                                </v-btn>
-                            </div>
-                        </template>
-                    </v-textarea>
+                    <div class="d-flex flex-row" :style="{ marginTop: '5px' }">
+                        <div class="flex-grow-1">
+                            <v-text-field
+                                v-model="editingPrompt.title"
+                                :loading="editingPrompt.updating"
+                                :label="$t('titlePrompt')"
+                                variant="underlined"
+                                density="compact"
+                                hide-details
+                            >
+                            </v-text-field>
+                            <v-textarea
+                                rows="2"
+                                v-model="editingPrompt.prompt"
+                                :loading="editingPrompt.updating"
+                                variant="underlined"
+                                density="compact"
+                                hide-details
+                            >
+                            </v-textarea>
+                        </div>
+                        <div>
+                            <div class="d-flex flex-column">
+                                <v-btn
+                                    icon="done"
+                                    variant="text"
+                                    :loading="editingPrompt.updating"
+                                    @click="updatePrompt(idx)"
+                                >
+                                </v-btn>
+                                <v-btn
+                                    icon="close"
+                                    variant="text"
+                                    @click="cancelEditPrompt()"
+                                >
+                                </v-btn>
+                            </div>
+                        </div>
+                    </div>
                 </v-list-item>
 
                 <v-list-item
                     v-if="!editingPrompt || editingPrompt.id !== prompt.id"
@@ -161,7 +179,7 @@ onMounted( () => {
                     active-color="primary"
                     @click="selectPrompt(prompt)"
                 >
-                    <v-list-item-title>{{ prompt.prompt }}</v-list-item-title>
+                    <v-list-item-title>{{ prompt.title ? prompt.title : prompt.prompt }}</v-list-item-title>
                     <template v-slot:append>
                         <v-btn
                             icon="edit"
 
@@ -182,6 +200,25 @@ onMounted( () => {
                 </v-list-item>
             </template>
 
+            <v-list-item
+                active-color="primary"
+            >
+                <div
+                    class="pt-3"
+                >
+                    <v-text-field
+                        rows="1"
+                        v-model="newTitlePrompt"
+                        :label="$t('titlePrompt')"
+                        variant="outlined"
+                        density="compact"
+                        hide-details
+                        clearable
+                    >
+                    </v-text-field>
+                </div>
+            </v-list-item>
+
             <v-list-item
                 active-color="primary"
             >
 
@@ -191,7 +228,7 @@ onMounted( () => {
                     <v-textarea
                         rows="2"
                         v-model="newPrompt"
-                        label="Add a new prompt"
+                        :label="$t('addNewPrompt')"
                         variant="outlined"
                         density="compact"
                         :error-messages="promptInputErrorMessage"
 
@@ -209,7 +246,7 @@ onMounted( () => {
                         @click="addPrompt()"
                     >
                         <v-icon icon="add"></v-icon>
-                        Add prompt
+                        {{ $t('addPrompt') }}
                     </v-btn>
             </v-list-item>
         </v-list>


@@ -1,10 +1,12 @@
-export const useModels = () => useState('models', () => getStoredModels())
+// export const useModels = () => useState('models', () => getStoredModels())
 export const useCurrentModel = () => useState('currentModel', () => getCurrentModel())
 export const useApiKey = () => useState('apiKey', () => getStoredApiKey())
-export const useConversion = () => useState('conversion', () => getDefaultConversionData())
-export const useConversions = () => useState('conversions', () => [])
+export const useConversation = () => useState('conversation', () => getDefaultConversationData())
+export const useConversations = () => useState('conversations', () => [])
+export const useSettings = () => useState('settings', () => {})

BIN demos/bmc_qr.png (new binary file, 64 KiB; not shown)

BIN demos/demo.mp4 (new binary file; not shown)

docker-compose.dev.yml (new file, 16 lines)

@@ -0,0 +1,16 @@
version: '3'
services:
  client:
    platform: linux/x86_64
    build: .
    environment:
      SERVER_DOMAIN: http://web-server
    ports:
      - '${CLIENT_PORT:-8080}:80'
    networks:
      - chatgpt_network
    restart: always
networks:
  chatgpt_network:
    external: True


@@ -1,12 +1,10 @@
 version: '3'
 services:
   client:
+    platform: linux/x86_64
     image: wongsaang/chatgpt-ui-client:latest
     environment:
       - SERVER_DOMAIN=http://backend-web-server
-      - NUXT_PUBLIC_APP_NAME='ChatGPT UI'
-      - NUXT_PUBLIC_TYPEWRITER=true
-      - NUXT_PUBLIC_TYPEWRITER_DELAY=100
     depends_on:
       - backend-web-server
     ports:
 
@@ -15,6 +13,7 @@ services:
       - chatgpt_ui_network
     restart: always
   backend-wsgi-server:
+    platform: linux/x86_64
     image: wongsaang/chatgpt-ui-wsgi-server:latest
     environment:
       - APP_DOMAIN=${APP_DOMAIN:-localhost:9000}
 
@@ -29,12 +28,14 @@ services:
 #      - EMAIL_HOST_USER=
 #      - EMAIL_HOST_PASSWORD=
 #      - EMAIL_USE_TLS=True
+#      - EMAIL_FROM=no-reply@example.com # Default sender email address
     ports:
       - '${WSGI_PORT:-8000}:8000'
     networks:
       - chatgpt_ui_network
     restart: always
   backend-web-server:
+    platform: linux/x86_64
     image: wongsaang/chatgpt-ui-web-server:latest
     environment:
       - BACKEND_URL=http://backend-wsgi-server:8000


@@ -1,14 +1,27 @@
-<p align="center">
-    <img alt="demo" src="../../demos/demo.gif?v=1">
-</p>
+<div align="center">
+    <h1>ChatGPT UI</h1>
+</div>
 
 [English](../../README.md) | [中文](./docs/zh/README.md)
 
-# ChatGPT UI
-
 ChatGPT Web 客户端,支持多用户,支持 Mysql、PostgreSQL 等多种数据库连接进行数据持久化存储,支持多语言。提供 Docker 镜像和快速部署脚本。
 
+https://user-images.githubusercontent.com/46235412/227156264-ca17ab17-999b-414f-ab06-3f75b5235bfe.mp4
+
 ## 📢 更新
 
+<details open>
+<summary><strong>2023-03-27</strong></summary>
+
+🚀 支持 gpt-4 模型。你可以在前端的“模型参数”中选择模型,gpt-4 模型需要通过 openai 的白名单才能使用。
+</details>
+
+<details open>
+<summary><strong>2023-03-23</strong></summary>
+
+增加网页搜索能力,使得 ChatGPT 生成的回答更与时俱进!
+该功能默认处于关闭状态,你可以在管理后台的 `Chat->Settings` 中开启它,在 Settings 中有一个 `open_web_search` 的记录,把它的值设置为 True。
+</details>
+
 <details open>
 <summary><strong>2023-03-15</strong></summary>
 
@@ -16,17 +29,7 @@ ChatGPT Web 客户端,支持多用户,支持 Mysql、PostgreSQL 等多种数
 </details>
 
-<details open>
-<summary><strong>2023-03-10</strong></summary>
-
-增加 2 个环境变量来控制打字机效果, 详见下方 docker-compose 配置的环境变量说明
-- `NUXT_PUBLIC_TYPEWRITER` 是否开启打字机效果
-- `NUXT_PUBLIC_TYPEWRITER_DELAY` 每个字的延迟时间,单位:毫秒
-</details>
-
-<details open>
+<details>
 <summary><strong>2023-03-04</strong></summary>
 
 **使用最新的官方聊天模型** `gpt-3.5-turbo`
 
@@ -88,9 +91,6 @@ services:
     image: wongsaang/chatgpt-ui-client:latest
     environment:
       - SERVER_DOMAIN=http://backend-web-server
-      - NUXT_PUBLIC_APP_NAME='ChatGPT UI' # App 名称,默认为 ChatGPT UI
-      - NUXT_PUBLIC_TYPEWRITER=true # 是否启用打字机效果,默认关闭
-      - NUXT_PUBLIC_TYPEWRITER_DELAY=100 # 打字机效果的延迟时间,默认 50毫秒
     depends_on:
       - backend-web-server
     ports:
 
@@ -113,6 +113,7 @@ services:
 #      - EMAIL_HOST_USER=
 #      - EMAIL_HOST_PASSWORD=
 #      - EMAIL_USE_TLS=True
+#      - EMAIL_FROM=no-reply@example.com # 默认发件邮箱地址
     ports:
       - '8000:8000'
     networks:
 
@@ -154,6 +155,16 @@ networks:
 现在可以访问客户端地址 `http(s)://your.domain` / `http://123.123.123.123` 开始聊天。
 
+## 续杯咖啡
+
+> 如果对您有帮助,也是在帮助我自己.
+
+如果你想支持我,给我续杯咖啡吧 ❤️ [https://www.buymeacoffee.com/WongSaang](https://www.buymeacoffee.com/WongSaang)
+
+<p align="center">
+    <img height="150" src="https://github.com/WongSaang/chatgpt-ui/blob/main/demos/bmc_qr.png?raw=true"/>
+</p>
+
 ## Development
 
 ### Setup


@@ -10,6 +10,10 @@
     "saveAndClose": "Save & Close",
     "pleaseSelectAtLeastOneModelDot": "Please select at least one model.",
     "writeAMessage": "Write a message",
+    "frequentlyPrompts": "Frequently prompts",
+    "addPrompt": "Add prompt",
+    "titlePrompt": "Title",
+    "addNewPrompt": "Add a new prompt",
     "pressEnterToSendYourMessageOrShiftEnterToAddANewLine": "Press Enter to send your message or Shift+Enter to add a new line",
     "lightMode": "Light Mode",
     "darkMode": "Dark Mode",
 
@@ -17,6 +21,7 @@
     "themeMode": "Theme Mode",
     "feedback": "Feedback",
     "newConversation": "New conversation",
+    "defaultConversationTitle": "Unnamed",
     "clearConversations": "Clear conversations",
     "modelParameters": "Model Parameters",
     "model": "Model",
 
@@ -34,6 +39,9 @@
     "copied": "Copied",
     "delete": "Delete",
     "signOut": "Sign out",
+    "webSearch": "Web Search",
+    "webSearchDefaultPrompt": "Web search results:\n\n[web_results]\nCurrent date: [current_date]\n\nInstructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject.\nQuery: [query]",
+    "genTitlePrompt": "Generate a short title for the following content, no more than 10 words. \n\nContent: ",
     "welcomeScreen": {
         "introduction1": "is an unofficial client for ChatGPT, but uses the official OpenAI API.",
         "introduction2": "You will need an OpenAI API Key before you can use this client.",
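The `webSearchDefaultPrompt` string added above carries three placeholders — `[web_results]`, `[current_date]`, and `[query]` — which the backend presumably substitutes before sending the prompt to the model. A hypothetical substitution helper (the function name and call site are assumptions, not part of this diff; the actual substitution happens server-side):

```javascript
// Hypothetical helper: fill the placeholders implied by the
// webSearchDefaultPrompt locale string. Illustrates the template
// contract only; not code from the repository.
function fillWebSearchPrompt(template, { webResults, currentDate, query }) {
  return template
    .replace('[web_results]', webResults)
    .replace('[current_date]', currentDate)
    .replace('[query]', query)
}
```

Keeping the placeholders inside a translatable locale string lets each language (see the Russian and Chinese files below in this compare) reword the instructions while preserving the same substitution contract.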

lang/ru-RU.json (new file, 67 lines)

@@ -0,0 +1,67 @@
{
  "welcomeTo": "Добро пожаловать в",
  "language": "Язык",
  "setApiKey": "Установить ключ API",
  "setOpenAIApiKey": "Установить ключ API OpenAI",
  "openAIApiKey": "Ключ API OpenAI",
  "getAKey": "Получить ключ",
  "openAIModels": "Модели OpenAI",
  "aboutTheModels": "О моделях",
  "saveAndClose": "Сохранить & Закрыть",
  "pleaseSelectAtLeastOneModelDot": "Выберите хотя бы одну модель.",
  "writeAMessage": "Напишите сообщение",
  "frequentlyPrompts": "Список подсказок",
  "addPrompt": "Добавить подсказку",
  "titlePrompt": "Заголовок",
  "addNewPrompt": "Добавьте новую подсказку",
  "pressEnterToSendYourMessageOrShiftEnterToAddANewLine": "Нажмите Enter, чтобы отправить сообщение, или Shift+Enter, чтобы добавить новую строку.",
  "lightMode": "Светлая",
  "darkMode": "Темная",
  "followSystem": "Системная",
  "themeMode": "Тема",
  "feedback": "Обратная связь",
  "newConversation": "Новый чат",
  "defaultConversationTitle": "Безымянный",
  "clearConversations": "Очистить чаты",
  "modelParameters": "Параметры модели",
  "model": "Модель",
  "temperature": "Temperature",
  "topP": "Top P",
  "frequencyPenalty": "Frequency Penalty",
  "presencePenalty": "Presence Penalty",
  "maxTokens": "Max Tokens",
  "roles": {
    "me": "Я",
    "ai": "AI"
  },
  "edit": "Редактировать",
  "copy": "Копировать",
  "copied": "Скопировано",
  "delete": "Удалить",
  "signOut": "Выход",
  "webSearch": "Поиск в интернете",
  "webSearchDefaultPrompt": "Результаты веб-поиска:\n\n[web_results]\nТекущая дата: [current_date]\n\nИнструкции: Используя предоставленные результаты веб-поиска, напишите развернутый ответ на заданный запрос. Обязательно цитируйте результаты, используя обозначение [[number](URL)] после ссылки. Если предоставленные результаты поиска относятся к нескольким темам с одинаковым названием, напишите отдельные ответы для каждой темы.\nЗапрос: [query]",
  "genTitlePrompt": "Придумайте короткий заголовок для следующего содержания, не более 10 слов. \n\nСодержание: ",
  "welcomeScreen": {
    "introduction1": "является неофициальным клиентом для ChatGPT, но использует официальный API OpenAI.",
    "introduction2": "Вам понадобится ключ API OpenAI, прежде чем вы сможете использовать этот клиент.",
    "examples": {
      "title": "Примеры",
      "item1": "\"Объясни, что такое квантовые вычисления простыми словами\"",
      "item2": "\"Предложи несколько креативных идей для дня рождения 10-летнего ребенка?\"",
      "item3": "\"Как сделать HTTP-запрос в Javascript?\""
    },
    "capabilities": {
      "title": "Возможности",
      "item1": "Помнит, что пользователь сказал ранее в разговоре",
      "item2": "Позволяет пользователю вносить последующие исправления",
      "item3": "Научен отклонять неуместные запросы"
    },
    "limitations": {
      "title": "Ограничения",
      "item1": "Иногда может генерировать неверную информацию",
      "item2": "Иногда может создавать вредные инструкции или предвзятый контент",
      "item3": "Ограниченное знание мира и событий после 2021 года"
    }
  }
}


@@ -10,6 +10,10 @@
     "saveAndClose": "保存并关闭",
     "pleaseSelectAtLeastOneModelDot": "请至少选择一个模型",
     "writeAMessage": "输入信息",
+    "frequentlyPrompts": "Frequently prompts",
+    "addPrompt": "Add prompt",
+    "titlePrompt": "Title",
+    "addNewPrompt": "Add a new prompt",
     "pressEnterToSendYourMessageOrShiftEnterToAddANewLine": "按回车键发送您的信息或按Shift+Enter键添加新行",
     "lightMode": "明亮模式",
     "darkMode": "暗色模式",
 
@@ -17,6 +21,7 @@
     "themeMode": "主题模式",
     "feedback": "反馈",
     "newConversation": "新的对话",
+    "defaultConversationTitle": "未命名",
     "clearConversations": "清除对话",
     "modelParameters": "模型参数",
     "model": "模型",
 
@@ -34,6 +39,9 @@
     "copied": "已复制",
     "delete": "删除",
     "signOut": "退出登录",
+    "webSearch": "网页搜索",
+    "webSearchDefaultPrompt": "网络搜索结果:\n\n[web_results]\n当前日期:[current_date]\n\n说明:使用提供的网络搜索结果,对给定的查询写出全面的回复。确保在引用参考文献后使用 [[number](URL)] 符号进行引用结果。如果提供的搜索结果涉及到多个具有相同名称的主题,请针对每个主题编写单独的答案。\n查询:[query]",
+    "genTitlePrompt": "为以下内容生成一个不超过10个字的简短标题。 \n\n内容: ",
     "welcomeScreen": {
         "introduction1": "是一个非官方的ChatGPT客户端,但使用OpenAI的官方API。",
         "introduction2": "在使用本客户端之前,您需要一个OpenAI API密钥。",


@@ -22,8 +22,8 @@ const setLang = (lang) => {
   setLocale(lang)
 }
-const conversations = useConversions()
-const currentConversation = useConversion()
+const conversations = useConversations()
+const currentConversation = useConversation()
 const editingConversation = ref(null)
 const deletingConversationIndex = ref(null)
@@ -54,7 +54,8 @@ const deleteConversation = async (index) => {
   deletingConversationIndex.value = null
   if (!error.value) {
     if (conversations.value[index].id === currentConversation.value.id) {
-      createNewConversion()
+      console.log('delete current conversation')
+      createNewConversation()
     }
     conversations.value.splice(index, 1)
   }
@@ -78,12 +79,13 @@ const loadingConversations = ref(false)
 const loadConversations = async () => {
   loadingConversations.value = true
-  conversations.value = await getConversions()
+  conversations.value = await getConversations()
   loadingConversations.value = false
 }
 const {mdAndUp} = useDisplay()
 const drawerPermanent = computed(() => {
   return mdAndUp.value
 })
@@ -97,8 +99,18 @@ const signOut = async () => {
   }
 }
-onNuxtReady(async () => {
+const settings = useSettings()
+const showApiKeySetting = ref(false)
+watchEffect(() => {
+  if (settings.value) {
+    const settingsValue = toRaw(settings.value)
+    showApiKeySetting.value = settingsValue.open_api_key_setting && settingsValue.open_api_key_setting === 'True'
+  }
+})
+onMounted(async () => {
   loadConversations()
+  loadSettings()
 })
 </script>
@@ -119,8 +131,8 @@ onNuxtReady(async () => {
   block
   variant="outlined"
   prepend-icon="add"
-  @click="createNewConversion()"
   class="text-none"
+  @click="createNewConversation"
 >
   {{ $t('newConversation') }}
 </v-btn>
@@ -161,19 +173,19 @@ onNuxtReady(async () => {
 <v-list-item
   rounded="xl"
   active-color="primary"
-  @click="openConversationMessages(conversation)"
+  :to="conversation.id ? `/${conversation.id}` : undefined"
   v-bind="props"
 >
   <v-list-item-title>{{ conversation.topic }}</v-list-item-title>
   <template v-slot:append>
     <div
-      v-show="isHovering"
+      v-show="isHovering && conversation.id"
     >
       <v-btn
         icon="edit"
         size="small"
         variant="text"
-        @click.stop="editConversation(cIdx)"
+        @click.prevent="editConversation(cIdx)"
       >
       </v-btn>
       <v-btn
@@ -181,7 +193,7 @@ onNuxtReady(async () => {
         size="small"
         variant="text"
         :loading="deletingConversationIndex === cIdx"
-        @click.stop="deleteConversation(cIdx)"
+        @click.prevent="deleteConversation(cIdx)"
       >
       </v-btn>
     </div>
@@ -237,6 +249,10 @@ onNuxtReady(async () => {
   </v-card>
 </v-dialog>
+<ApiKeyDialog
+  v-if="showApiKeySetting"
+/>
 <ModelParameters/>
 <v-menu
@@ -295,7 +311,7 @@ onNuxtReady(async () => {
 <v-btn
   :title="$t('newConversation')"
   icon="add"
-  @click="createNewConversion()"
+  @click="createNewConversation()"
 ></v-btn>
 </v-app-bar>

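The `watchEffect` in the layout above gates the `ApiKeyDialog` on a backend setting that arrives as the string `'True'` rather than a boolean. That string-flag check can be sketched in isolation like this — the `isFlagEnabled` helper name is illustrative, not part of the repo:

```javascript
// Settings fetched from the backend arrive as strings, so boolean flags
// must be compared against their string form ('True'), not truthiness.
function isFlagEnabled(settings, key) {
  const value = settings ? settings[key] : undefined
  return Boolean(value && value === 'True')
}

// Example settings object in the shape produced after loadSettings().
const settings = { open_api_key_setting: 'True', other_flag: 'False' }
```

Note that a plain truthiness test would be wrong here: the string `'False'` is truthy in JavaScript, so the explicit `=== 'True'` comparison is what makes the admin toggle work.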
View File

@@ -1,5 +1,5 @@
 // https://nuxt.com/docs/api/configuration/nuxt-config
-const appName = 'ChatGPT UI'
+const appName = process.env.NUXT_PUBLIC_APP_NAME ?? 'ChatGPT UI'
 export default defineNuxtConfig({
   dev: false,
@@ -14,6 +14,7 @@ export default defineNuxtConfig({
   appName: appName,
   typewriter: false,
   typewriterDelay: 50,
+  customApiKey: false
 }
 },
@@ -53,6 +54,12 @@ export default defineNuxtConfig({
   iso: 'zh-CN',
   name: '简体中文',
   file: 'zh-CN.json',
+},
+{
+  code: 'ru',
+  iso: 'ru-RU',
+  name: 'Русский',
+  file: 'ru-RU.json',
 }
 ],
 lazy: true,

View File

@@ -11,7 +11,6 @@
 "@kevinmarrec/nuxt-pwa": "^0.17.0",
 "@nuxtjs/color-mode": "^3.2.0",
 "@nuxtjs/i18n": "^8.0.0-beta.9",
-"@vite-pwa/nuxt": "^0.0.7",
 "material-design-icons-iconfont": "^6.7.0",
 "nuxt": "^3.2.0"
 },

View File

@@ -1,234 +1,35 @@
 <script setup>
-import Prompt from "~/components/Prompt.vue";
 definePageMeta({
-  middleware: ["auth"]
+  middleware: ["auth"],
+  path: '/:id?',
+  keepalive: true
 })
-import {EventStreamContentType, fetchEventSource} from '@microsoft/fetch-event-source'
-import { nextTick } from 'vue'
-import MessageActions from "~/components/MessageActions.vue";
-
-const { $i18n, $auth } = useNuxtApp()
-const runtimeConfig = useRuntimeConfig()
-const currentModel = useCurrentModel()
-const openaiApiKey = useApiKey()
-const fetchingResponse = ref(false)
-const messageQueue = []
-let isProcessingQueue = false
-
-const processMessageQueue = () => {
-  if (isProcessingQueue || messageQueue.length === 0) {
-    return
-  }
-  if (!currentConversation.value.messages[currentConversation.value.messages.length - 1].is_bot) {
-    currentConversation.value.messages.push({id: null, is_bot: true, message: ''})
-  }
-  isProcessingQueue = true
-  const nextMessage = messageQueue.shift()
-  if (runtimeConfig.public.typewriter) {
-    let wordIndex = 0;
-    const intervalId = setInterval(() => {
-      currentConversation.value.messages[currentConversation.value.messages.length - 1].message += nextMessage[wordIndex]
-      wordIndex++
-      if (wordIndex === nextMessage.length) {
-        clearInterval(intervalId)
-        isProcessingQueue = false
-        processMessageQueue()
-      }
-    }, runtimeConfig.public.typewriterDelay)
-  } else {
-    currentConversation.value.messages[currentConversation.value.messages.length - 1].message += nextMessage
-    isProcessingQueue = false
-    processMessageQueue()
-  }
-}
-
-let ctrl
-const abortFetch = () => {
-  if (ctrl) {
-    ctrl.abort()
-  }
-  fetchingResponse.value = false
-}
-const fetchReply = async (message) => {
-  ctrl = new AbortController()
-  const data = Object.assign({}, currentModel.value, {
-    openaiApiKey: openaiApiKey.value,
-    message: message,
-    conversationId: currentConversation.value.id
-  })
-  try {
-    await fetchEventSource('/api/conversation/', {
-      signal: ctrl.signal,
-      method: 'POST',
-      headers: {
-        'accept': 'application/json',
-        'Content-Type': 'application/json',
-      },
-      body: JSON.stringify(data),
-      onopen(response) {
-        if (response.ok && response.headers.get('content-type') === EventStreamContentType) {
-          return;
-        }
-        throw new Error(`Failed to send message. HTTP ${response.status} - ${response.statusText}`);
-      },
-      onclose() {
-        if (ctrl.signal.aborted === true) {
-          return;
-        }
-        throw new Error(`Failed to send message. Server closed the connection unexpectedly.`);
-      },
-      onerror(err) {
-        throw err;
-      },
-      async onmessage(message) {
-        // console.log(message)
-        const event = message.event
-        const data = JSON.parse(message.data)
-        if (event === 'error') {
-          throw new Error(data.error);
-        }
-        if (event === 'userMessageId') {
-          currentConversation.value.messages[currentConversation.value.messages.length - 1].id = data.userMessageId
-          return;
-        }
-        if (event === 'done') {
-          if (currentConversation.value.id === null) {
-            currentConversation.value.id = data.conversationId
-            genTitle(currentConversation.value.id)
-          }
-          currentConversation.value.messages[currentConversation.value.messages.length - 1].id = data.messageId
-          abortFetch()
-          return;
-        }
-        messageQueue.push(data.content)
-        processMessageQueue()
-        scrollChatWindow()
-      },
-    })
-  } catch (err) {
-    console.log(err)
-    abortFetch()
-    showSnackbar(err.message)
-  }
-}
-
-const currentConversation = useConversion()
-const grab = ref(null)
-const scrollChatWindow = () => {
-  if (grab.value === null) {
-    return;
-  }
-  grab.value.scrollIntoView({behavior: 'smooth'})
-}
-const send = (message) => {
-  fetchingResponse.value = true
-  currentConversation.value.messages.push({message: message})
-  fetchReply(message)
-  scrollChatWindow()
-}
-const stop = () => {
-  abortFetch()
-}
-
-const snackbar = ref(false)
-const snackbarText = ref('')
-const showSnackbar = (text) => {
-  snackbarText.value = text
-  snackbar.value = true
-}
-
-const editor = ref(null)
-const usePrompt = (prompt) => {
-  editor.value.usePrompt(prompt)
-}
-const deleteMessage = (index) => {
-  currentConversation.value.messages.splice(index, 1)
-}
+const route = useRoute()
+const currentConversation = useConversation()
+const conversation = ref(getDefaultConversationData())
+watchEffect(() => {
+  if (!route.params.id) {
+    conversation.value = getDefaultConversationData()
+  }
+})
+const loadMessage = async () => {
+  conversation.value.loadingMessages = true
+  const { data, error } = await useAuthFetch('/api/chat/messages/?conversationId=' + route.params.id)
+  if (!error.value) {
+    conversation.value.messages = data.value
+  }
+  conversation.value.loadingMessages = false
+}
+onMounted(async () => {
+  if (route.params.id) {
+    conversation.value.id = parseInt(route.params.id)
+    await loadMessage()
+  }
+  currentConversation.value = Object.assign({}, conversation.value)
+})
 </script>
 <template>
-  <div
-    v-if="currentConversation.messages.length > 0"
-    ref="chatWindow"
-  >
-    <v-container>
-      <v-row>
-        <v-col
-          v-for="(message, index) in currentConversation.messages" :key="index"
-          cols="12"
-        >
-          <div
-            class="d-flex align-center"
-            :class="message.is_bot ? 'justify-start' : 'justify-end'"
-          >
-            <MessageActions
-              v-if="!message.is_bot"
-              :message="message"
-              :message-index="index"
-              :use-prompt="usePrompt"
-              :delete-message="deleteMessage"
-            />
-            <MsgContent :message="message" />
-            <MessageActions
-              v-if="message.is_bot"
-              :message="message"
-              :message-index="index"
-              :use-prompt="usePrompt"
-              :delete-message="deleteMessage"
-            />
-          </div>
-        </v-col>
-      </v-row>
-    </v-container>
-    <div ref="grab" class="w-100" style="height: 200px;"></div>
-  </div>
-  <Welcome v-else />
-  <v-footer app class="d-flex flex-column">
-    <div class="px-md-16 w-100 d-flex align-center">
-      <Prompt v-show="!fetchingResponse" :use-prompt="usePrompt" />
-      <v-btn
-        v-show="fetchingResponse"
-        icon="close"
-        title="stop"
-        class="mr-3"
-        @click="stop"
-      ></v-btn>
-      <MsgEditor ref="editor" :send-message="send" :disabled="fetchingResponse" :loading="fetchingResponse" />
-    </div>
-    <div class="px-4 py-2 text-disabled text-caption font-weight-light text-center w-100">
-      © {{ new Date().getFullYear() }} {{ runtimeConfig.public.appName }}
-    </div>
-  </v-footer>
-  <v-snackbar
-    v-model="snackbar"
-    multi-line
-    location="top"
-  >
-    {{ snackbarText }}
-    <template v-slot:actions>
-      <v-btn
-        color="red"
-        variant="text"
-        @click="snackbar = false"
-      >
-        Close
-      </v-btn>
-    </template>
-  </v-snackbar>
+  <Conversation :conversation="conversation" />
 </template>
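The page removed above drove streaming output through a `messageQueue`/`processMessageQueue` pair: chunks arriving from the SSE stream were buffered and appended to the last bot message one at a time. The non-typewriter path can be sketched roughly like this, stripped of Vue reactivity — the names mirror the removed code, but the standalone shape is an assumption:

```javascript
// Minimal, framework-free sketch of the removed streaming queue:
// chunks are buffered and flushed sequentially onto the last bot message.
const messages = []
const messageQueue = []
let isProcessingQueue = false

function processMessageQueue() {
  if (isProcessingQueue || messageQueue.length === 0) return
  // Ensure the last entry is a bot message before appending to it.
  if (messages.length === 0 || !messages[messages.length - 1].is_bot) {
    messages.push({ id: null, is_bot: true, message: '' })
  }
  isProcessingQueue = true
  const nextMessage = messageQueue.shift()
  messages[messages.length - 1].message += nextMessage
  isProcessingQueue = false
  processMessageQueue() // drain any chunks that were queued meanwhile
}

messageQueue.push('Hello')
messageQueue.push(', world')
processMessageQueue()
```

The `isProcessingQueue` guard matters in the real page because the typewriter branch appends characters asynchronously on a timer; new chunks that arrive mid-animation must wait their turn rather than interleave.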

View File

@@ -5,11 +5,25 @@ export const STORAGE_KEY = {
   OPENAI_API_KEY: 'openai_api_key',
 }
-export const DEFAULT_MODEL = {
-  name: 'gpt-3.5-turbo',
-  frequency_penalty: 0.0,
-  presence_penalty: 0.0,
-  max_tokens: 1000,
-  temperature: 0.7,
-  top_p: 1.0
+export const MODELS = {
+  'gpt-3.5-turbo': {
+    name: 'gpt-3.5-turbo',
+    frequency_penalty: 0.0,
+    presence_penalty: 0.0,
+    total_tokens: 4096,
+    max_tokens: 1000,
+    temperature: 0.7,
+    top_p: 1.0
+  },
+  'gpt-4': {
+    name: 'gpt-4',
+    frequency_penalty: 0.0,
+    presence_penalty: 0.0,
+    total_tokens: 8192,
+    max_tokens: 2000,
+    temperature: 0.7,
+    top_p: 1.0
+  }
 }
+export const DEFAULT_MODEL_NAME = 'gpt-3.5-turbo'

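With the single `DEFAULT_MODEL` object replaced by a keyed `MODELS` map plus `DEFAULT_MODEL_NAME`, callers can resolve a model configuration by name and fall back to the default for unknown names. A sketch of that lookup — the `resolveModel` helper is illustrative, not part of the repo:

```javascript
// Mirrors the shape of MODELS / DEFAULT_MODEL_NAME from utils/enums.js
// (trimmed to the fields relevant here).
const MODELS = {
  'gpt-3.5-turbo': { name: 'gpt-3.5-turbo', total_tokens: 4096, max_tokens: 1000 },
  'gpt-4': { name: 'gpt-4', total_tokens: 8192, max_tokens: 2000 },
}
const DEFAULT_MODEL_NAME = 'gpt-3.5-turbo'

// Hypothetical helper: unknown or missing names fall back to the default.
function resolveModel(name) {
  return MODELS[name] ?? MODELS[DEFAULT_MODEL_NAME]
}
```

The map structure also lets each model carry its own context budget (`total_tokens`) and reply cap (`max_tokens`), which is what makes the GPT-4 support in commit 8175f199 a data change rather than new branching logic.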
View File

@@ -1,14 +1,15 @@
-export const getDefaultConversionData = () => {
+export const getDefaultConversationData = () => {
+  const { $i18n } = useNuxtApp()
   return {
     id: null,
-    topic: null,
+    topic: $i18n.t('defaultConversationTitle'),
     messages: [],
     loadingMessages: false,
   }
 }
-export const getConversions = async () => {
+export const getConversations = async () => {
   const { data, error } = await useAuthFetch('/api/chat/conversations/')
   if (!error.value) {
     return data.value
@@ -16,38 +17,55 @@ export const getConversions = async () => {
   return []
 }
-export const createNewConversion = () => {
-  const conversation = useConversion()
-  conversation.value = getDefaultConversionData()
+export const createNewConversation = () => {
+  const conversation = useConversation()
+  conversation.value = getDefaultConversationData()
+  navigateTo('/')
 }
-export const openConversationMessages = async (currentConversation) => {
-  const conversation = useConversion()
-  conversation.value = Object.assign(conversation.value, currentConversation)
-  conversation.value.loadingMessages = true
-  const { data, error } = await useAuthFetch('/api/chat/messages/?conversationId=' + currentConversation.id)
-  if (!error.value) {
-    conversation.value.messages = data.value
-  }
-  conversation.value.loadingMessages = true
+export const addConversation = (conversation) => {
+  const conversations = useConversations()
+  conversations.value = [conversation, ...conversations.value]
 }
 export const genTitle = async (conversationId) => {
+  const { $i18n } = useNuxtApp()
   const { data, error } = await useAuthFetch('/api/gen_title/', {
     method: 'POST',
     body: {
-      conversationId: conversationId
+      conversationId: conversationId,
+      prompt: $i18n.t('genTitlePrompt')
     }
   })
   if (!error.value) {
-    const conversation = {
-      id: conversationId,
-      topic: data.value.title,
-    }
-    const conversations = useConversions()
-    // prepend to conversations
-    conversations.value = [conversation, ...conversations.value]
+    const conversations = useConversations()
+    let index = conversations.value.findIndex(item => item.id === conversationId)
+    if (index === -1) {
+      index = 0
+    }
+    conversations.value[index].topic = data.value.title
     return data.value.title
   }
   return null
 }
+const transformData = (list) => {
+  const result = {};
+  for (let i = 0; i < list.length; i++) {
+    const item = list[i];
+    result[item.name] = item.value;
+  }
+  return result;
+}
+export const loadSettings = async () => {
+  const settings = useSettings()
+  const { data, error } = await useAuthFetch('/api/chat/settings/', {
+    method: 'GET'
+  })
+  if (!error.value) {
+    settings.value = transformData(data.value)
+  }
+}
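The `transformData` helper added above flattens the settings list returned by `/api/chat/settings/` — an array of `{name, value}` objects — into a plain lookup object. In isolation (the example payload values are illustrative, not taken from the API):

```javascript
// Flatten [{name, value}, ...] into {name: value, ...}, as transformData does.
const transformData = (list) => {
  const result = {}
  for (let i = 0; i < list.length; i++) {
    const item = list[i]
    result[item.name] = item.value
  }
  return result
}

// Illustrative payload in the shape the settings endpoint returns.
const settings = transformData([
  { name: 'open_api_key_setting', value: 'True' },
  { name: 'open_registration', value: 'False' },
])
```

This keyed form is what the layout's `watchEffect` then reads as `settingsValue.open_api_key_setting`.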

View File

@@ -1,3 +1,4 @@
+import {MODELS} from "~/utils/enums";
 const get = (key) => {
   let val = localStorage.getItem(key)
@@ -17,13 +18,13 @@ export const setModels = (val) => {
   models.value = val
 }
-export const getStoredModels = () => {
-  let models = get(STORAGE_KEY.MODELS)
-  if (!models) {
-    models = [DEFAULT_MODEL]
-  }
-  return models
-}
+// export const getStoredModels = () => {
+//   let models = get(STORAGE_KEY.MODELS)
+//   if (!models) {
+//     models = [DEFAULT_MODEL]
+//   }
+//   return models
+// }
 export const saveCurrentModel = (val) => {
   set(STORAGE_KEY.CURRENT_MODEL, val)
@@ -32,7 +33,7 @@ export const saveCurrentModel = (val) => {
 export const getCurrentModel = () => {
   let model = get(STORAGE_KEY.CURRENT_MODEL)
   if (!model) {
-    model = DEFAULT_MODEL
+    model = MODELS[DEFAULT_MODEL_NAME]
  }
   return model
 }

yarn.lock

File diff suppressed because it is too large (2003 lines changed)