Compare commits

12 commits

| Author | SHA1 | Date |
|---|---|---|
| | eff787a2b7 | |
| | 8616526136 | |
| | 32e3963390 | |
| | 7d52e6dd1e | |
| | ba41015df8 | |
| | f084460d1c | |
| | a4ef23d603 | |
| | d3daa654a7 | |
| | 9576edf26f | |
| | 5b74ac9cc6 | |
| | 444e2ec2e8 | |
| | 60b7874f65 | |
```diff
@@ -1,6 +1,6 @@
+**/node_modules
+*/node_modules
 node_modules
 Dockerfile
-.git
-.husky
-.github
-.vscode
+.*
+*/.*
```
**.gitignore** (vendored) · 4 changes

```diff
@@ -27,3 +27,7 @@ coverage
 *.njsproj
 *.sln
 *.sw?
+
+# Environment variables files
+/service/.env
+/.env
```
**CHANGELOG.md** · 27 changes

```diff
@@ -1,3 +1,30 @@
+## v2.10.1
+
+`2023-03-09`
+
+Note: the `.env` file has been removed in favor of `.env.example`. If you deploy manually, you now need to create the `.env` file yourself and copy the variables you need from `.env.example`; `.env` is now ignored by `Git` commits. The reasons:
+
+- Shipping `.env` in the repository was a bad practice from the start
+- Anyone who `Fork`s the project for modification and testing keeps being interrupted by `Git` modified-file prompts
+- Thanks to [yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/395) for the reminder and the fix
+
+Over the last couple of days the official side has started cutting off third-party proxies, so `accessToken` access may stop working soon if it has not already. Abnormal `API` usage is also leading to account bans, for reasons that are unclear; if the `API` reports errors, check the backend console output or watch your inbox.
+
+## Feature
+
+- Thanks to [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/393) for adding a switch that controls whether context is sent
+
+## Enhancement
+
+- Thanks to [nagaame](https://github.com/Chanzhaoyu/chatgpt-web/pull/415) for shrinking the oversized `docker` image
+- Thanks to [xieccc](https://github.com/Chanzhaoyu/chatgpt-web/pull/404) for the new `API` model configuration variable `OPENAI_API_MODEL`
+- Thanks to [acongee](https://github.com/Chanzhaoyu/chatgpt-web/pull/394) for fixing scrollbar behavior while output is streaming
+
+## BugFix
+
+- Thanks to [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/392) for fixing avatars being lost from exported images
+- Fixed dark-mode styling of exported images
+
 ## v2.10.0

 `2023-03-07`
```
**Dockerfile** · 12 changes

```diff
@@ -4,7 +4,11 @@ FROM node:lts-alpine AS builder
 COPY ./ /app

 WORKDIR /app

-RUN npm install pnpm -g && pnpm install && pnpm run build
+RUN apk add --no-cache git \
+    && npm install pnpm -g \
+    && pnpm install \
+    && pnpm run build \
+    && rm -rf /root/.npm /root/.pnpm-store /usr/local/share/.cache /tmp/*

 # service
 FROM node:lts-alpine
@@ -13,7 +17,11 @@ COPY /service /app
 COPY --from=builder /app/dist /app/public

 WORKDIR /app

-RUN npm install pnpm -g && pnpm install
+RUN apk add --no-cache git \
+    && npm install pnpm -g \
+    && pnpm install --only=production \
+    && rm -rf /root/.npm /root/.pnpm-store /usr/local/share/.cache /tmp/*

 EXPOSE 3002
```
````diff
@@ -55,7 +55,7 @@ Comparison:
 [Details](https://github.com/Chanzhaoyu/chatgpt-web/issues/138)

 Switching Methods:
-1. Go to the `service/.env` file.
+1. Go to the `service/.env.example` file and copy the contents to the `service/.env` file.
 2. For `OpenAI API Key`, fill in the `OPENAI_API_KEY` field [(Get apiKey)](https://platform.openai.com/overview).
 3. For `Web API`, fill in the `OPENAI_ACCESS_TOKEN` field [(Get accessToken)](https://chat.openai.com/api/auth/session).
 4. When both are present, `OpenAI API Key` takes precedence.
@@ -168,6 +168,7 @@ pnpm dev
 - `OPENAI_API_KEY` one of two
 - `OPENAI_ACCESS_TOKEN` one of two, `OPENAI_API_KEY` takes precedence when both are present
 - `OPENAI_API_BASE_URL` optional, available when `OPENAI_API_KEY` is set
+- `OPENAI_API_MODEL` optional, available when `OPENAI_API_KEY` is set
 - `API_REVERSE_PROXY` optional, available when `OPENAI_ACCESS_TOKEN` is set [Reference](#introduction)
 - `AUTH_SECRET_KEY` Access Password, optional
 - `TIMEOUT_MS` timeout, in milliseconds, optional
@@ -210,6 +211,8 @@ services:
     OPENAI_ACCESS_TOKEN: xxxxxx
     # api interface url, optional, available when OPENAI_API_KEY is set
     OPENAI_API_BASE_URL: xxxx
+    # api model, optional, available when OPENAI_API_KEY is set
+    OPENAI_API_MODEL: xxxx
     # reverse proxy, optional
     API_REVERSE_PROXY: xxx
     # access password, optional
@@ -222,6 +225,7 @@ services:
     SOCKS_PROXY_PORT: xxxx
 ```
 The `OPENAI_API_BASE_URL` is optional and only used when setting the `OPENAI_API_KEY`.
+The `OPENAI_API_MODEL` is optional and only used when setting the `OPENAI_API_KEY`.

 ### Deployment with Railway

@@ -237,6 +241,7 @@ The `OPENAI_API_BASE_URL` is optional and only used when setting the `OPENAI_API
 | `OPENAI_API_KEY` | Optional | Required for `OpenAI API`. `apiKey` can be obtained from [here](https://platform.openai.com/overview). |
 | `OPENAI_ACCESS_TOKEN` | Optional | Required for `Web API`. `accessToken` can be obtained from [here](https://chat.openai.com/api/auth/session). |
 | `OPENAI_API_BASE_URL` | Optional, only for `OpenAI API` | API endpoint. |
+| `OPENAI_API_MODEL` | Optional, only for `OpenAI API` | API model. |
 | `API_REVERSE_PROXY` | Optional, only for `Web API` | Reverse proxy address for `Web API`. [Details](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) |
 | `SOCKS_PROXY_HOST` | Optional, effective with `SOCKS_PROXY_PORT` | Socks proxy. |
 | `SOCKS_PROXY_PORT` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy port. |
@@ -266,7 +271,7 @@ PS: You can also run `pnpm start` directly on the server without packaging.

 #### Frontend webpage

-1. Modify `VITE_APP_API_BASE_URL` in `.env` at the root directory to your actual backend interface address.
+1. Refer to the root-directory `.env.example` file to create a `.env` file, then modify `VITE_APP_API_BASE_URL` in `.env` to your actual backend interface address.
 2. Run the following command in the root directory and then copy the files in the `dist` folder to the root directory of your website service.

 [Reference information](https://cn.vitejs.dev/guide/static-deploy.html#building-the-app)
````
````diff
@@ -54,7 +54,7 @@
 [Details](https://github.com/Chanzhaoyu/chatgpt-web/issues/138)

 Switching methods:
-1. Go to the `service/.env` file
+1. Go to the `service/.env.example` file and copy its contents into the `service/.env` file
 2. For `OpenAI API Key`, fill in the `OPENAI_API_KEY` field [(Get apiKey)](https://platform.openai.com/overview)
 3. For `Web API`, fill in the `OPENAI_ACCESS_TOKEN` field [(Get accessToken)](https://chat.openai.com/api/auth/session)
 4. When both are present, `OpenAI API Key` takes precedence
@@ -166,6 +166,7 @@ pnpm dev
 - `OPENAI_API_KEY` one of two
 - `OPENAI_ACCESS_TOKEN` one of two; when both are present, `OPENAI_API_KEY` takes precedence
 - `OPENAI_API_BASE_URL` optional, available when `OPENAI_API_KEY` is set
+- `OPENAI_API_MODEL` optional, available when `OPENAI_API_KEY` is set
 - `API_REVERSE_PROXY` optional, available when `OPENAI_ACCESS_TOKEN` is set [Reference](#介绍)
 - `AUTH_SECRET_KEY` access password, optional
 - `TIMEOUT_MS` timeout in milliseconds, optional
@@ -208,6 +209,8 @@ services:
     OPENAI_ACCESS_TOKEN: xxxxxx
     # API endpoint, optional, available when OPENAI_API_KEY is set
     OPENAI_API_BASE_URL: xxxx
+    # API model, optional, available when OPENAI_API_KEY is set
+    OPENAI_API_MODEL: xxxx
     # reverse proxy, optional
     API_REVERSE_PROXY: xxx
     # access password, optional
@@ -220,6 +223,7 @@ services:
     SOCKS_PROXY_PORT: xxxx
 ```
 - `OPENAI_API_BASE_URL` optional, available when `OPENAI_API_KEY` is set
+- `OPENAI_API_MODEL` optional, available when `OPENAI_API_KEY` is set
 ### Deployment with Railway

 [](https://railway.app/new/template/yytmgc)
@@ -234,6 +238,7 @@ services:
 | `OPENAI_API_KEY` | One of two for `OpenAI API` | `apiKey` required for `OpenAI API` [(Get apiKey)](https://platform.openai.com/overview) |
 | `OPENAI_ACCESS_TOKEN` | One of two for `Web API` | `accessToken` required for `Web API` [(Get accessToken)](https://chat.openai.com/api/auth/session) |
 | `OPENAI_API_BASE_URL` | Optional, for `OpenAI API` | `API` endpoint |
+| `OPENAI_API_MODEL` | Optional, for `OpenAI API` | `API` model |
 | `API_REVERSE_PROXY` | Optional, for `Web API` | Reverse proxy address for `Web API` [Details](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) |
 | `SOCKS_PROXY_HOST` | Optional, effective together with `SOCKS_PROXY_PORT` | Socks proxy |
 | `SOCKS_PROXY_PORT` | Optional, effective together with `SOCKS_PROXY_HOST` | Socks proxy port |
@@ -261,7 +266,7 @@ PS: You can also run `pnpm start` directly on the server without packaging.

 #### Frontend webpage

-1. Modify `VITE_APP_API_BASE_URL` in the root-directory `.env` to your actual backend interface address
+1. Refer to the root-directory `.env.example` file to create a `.env` file, then modify `VITE_APP_API_BASE_URL` to your actual backend interface address

 2. Run the following command in the root directory, then copy the files in the `dist` folder to the root directory of your website service
````
```diff
@@ -12,6 +12,8 @@ services:
     OPENAI_ACCESS_TOKEN: xxxxxx
     # API endpoint, optional, available when OPENAI_API_KEY is set
     OPENAI_API_BASE_URL: xxxx
+    # API model, optional, available when OPENAI_API_KEY is set
+    OPENAI_API_MODEL: xxxx
     # reverse proxy, optional
     API_REVERSE_PROXY: xxx
     # access password, optional
```
```diff
@@ -1,6 +1,6 @@
 {
   "name": "chatgpt-web",
-  "version": "2.10.0",
+  "version": "2.10.1",
   "private": false,
   "description": "ChatGPT Web",
   "author": "ChenZhaoYu <chenzhaoyu1994@gmail.com>",
```
```diff
@@ -7,6 +7,9 @@ OPENAI_ACCESS_TOKEN=
 # OpenAI API Base URL - https://api.openai.com
 OPENAI_API_BASE_URL=

+# OpenAI API Model - https://platform.openai.com/docs/models
+OPENAI_API_MODEL=
+
 # Reverse Proxy
 API_REVERSE_PROXY=
```
```diff
@@ -8,10 +8,8 @@ import { sendResponse } from '../utils'
 import type { ApiModel, ChatContext, ChatGPTUnofficialProxyAPIOptions, ModelConfig } from '../types'

 const ErrorCodeMessage: Record<string, string> = {
-  400: '[OpenAI] 模型的最大上下文长度是4096个令牌,请减少信息的长度。| This model\'s maximum context length is 4096 tokens.',
   401: '[OpenAI] 提供错误的API密钥 | Incorrect API key provided',
   403: '[OpenAI] 服务器拒绝访问,请稍后再试 | Server refused to access, please try again later',
-  429: '[OpenAI] 服务器限流,请稍后再试 | Server was limited, please try again later',
   502: '[OpenAI] 错误的网关 | Bad Gateway',
   503: '[OpenAI] 服务器繁忙,请稍后再试 | Server is busy, please try again later',
   504: '[OpenAI] 网关超时 | Gateway Time-out',
@@ -36,7 +34,7 @@ let api: ChatGPTAPI | ChatGPTUnofficialProxyAPI
 const options: ChatGPTAPIOptions = {
   apiKey: process.env.OPENAI_API_KEY,
   completionParams: {
-    model: 'gpt-3.5-turbo',
+    model: process.env.OPENAI_API_MODEL ?? 'gpt-3.5-turbo',
   },
   debug: false,
 }
```
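The new `completionParams.model` line falls back with nullish coalescing, and one detail is worth noting: `??` only substitutes the default when `OPENAI_API_MODEL` is `null` or `undefined`, so a present-but-empty variable is passed through as an empty string. A minimal sketch of the same resolution logic (the `resolveModel` helper is illustrative, not part of the diff):

```typescript
// Illustrative helper mirroring `process.env.OPENAI_API_MODEL ?? 'gpt-3.5-turbo'`.
function resolveModel(envModel: string | undefined): string {
  // `??` falls back only on null/undefined, not on other falsy values such as ''.
  return envModel ?? 'gpt-3.5-turbo'
}

console.log(resolveModel(undefined)) // variable unset: default model
console.log(resolveModel('gpt-4'))   // explicit override wins
console.log(resolveModel('') === '') // an empty OPENAI_API_MODEL= line is passed through as ''
```

In practice this means an `OPENAI_API_MODEL=` line with no value would send an empty model name rather than the default, so the variable should be either set to a model or removed entirely.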
```diff
@@ -27,6 +27,9 @@ export default {
   exportImageConfirm: 'Are you sure to export this chat to png?',
   exportSuccess: 'Export Success',
   exportFailed: 'Export Failed',
+  usingContext: 'Context Mode',
+  turnOnContext: 'In the current mode, sending messages will carry previous chat records.',
+  turnOffContext: 'In the current mode, sending messages will not carry previous chat records.',
   deleteMessage: 'Delete Message',
   deleteMessageConfirm: 'Are you sure to delete this message?',
   deleteHistoryConfirm: 'Are you sure to clear this history?',
```
```diff
@@ -27,6 +27,9 @@ export default {
   exportImageConfirm: '是否将会话保存为图片?',
   exportSuccess: '保存成功',
   exportFailed: '保存失败',
+  usingContext: '上下文模式',
+  turnOnContext: '当前模式下, 发送消息会携带之前的聊天记录',
+  turnOffContext: '当前模式下, 发送消息不会携带之前的聊天记录',
   deleteMessage: '删除消息',
   deleteMessageConfirm: '是否删除此消息?',
   deleteHistoryConfirm: '确定删除此记录?',
```
```diff
@@ -27,6 +27,9 @@ export default {
   exportImageConfirm: '是否將對話儲存為圖片?',
   exportSuccess: '儲存成功',
   exportFailed: '儲存失敗',
+  usingContext: '上下文模式',
+  turnOnContext: '在當前模式下, 發送訊息會攜帶之前的聊天記錄。',
+  turnOffContext: '在當前模式下, 發送訊息不會攜帶之前的聊天記錄。',
   deleteMessage: '刪除訊息',
   deleteMessageConfirm: '是否刪除此訊息?',
   deleteHistoryConfirm: '確定刪除此紀錄?',
```
```diff
@@ -38,7 +38,7 @@ const wrapClass = computed(() => {
     'text-wrap',
     'min-w-[20px]',
     'rounded-md',
-    isMobile.value ? 'p-2' : 'p-3',
+    isMobile.value ? 'p-2' : 'px-3 py-2',
     props.inversion ? 'bg-[#d2f9d1]' : 'bg-[#f4f6f8]',
    props.inversion ? 'dark:bg-[#a1dc95]' : 'dark:bg-[#1e1e20]',
     { 'text-red-500': props.error },
```
```diff
@@ -7,6 +7,7 @@ interface ScrollReturn {
   scrollRef: Ref<ScrollElement>
   scrollToBottom: () => Promise<void>
   scrollToTop: () => Promise<void>
+  scrollToBottomIfAtBottom: () => Promise<void>
 }

 export function useScroll(): ScrollReturn {
@@ -24,9 +25,20 @@ export function useScroll(): ScrollReturn {
     scrollRef.value.scrollTop = 0
   }

+  const scrollToBottomIfAtBottom = async () => {
+    await nextTick()
+    if (scrollRef.value) {
+      const threshold = 50 // distance (px) from the bottom that still counts as "at the bottom"
+      const distanceToBottom = scrollRef.value.scrollHeight - scrollRef.value.scrollTop - scrollRef.value.clientHeight
+      if (distanceToBottom <= threshold)
+        scrollRef.value.scrollTop = scrollRef.value.scrollHeight
+    }
+  }
+
   return {
     scrollRef,
     scrollToBottom,
     scrollToTop,
+    scrollToBottomIfAtBottom,
   }
 }
```
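The new `scrollToBottomIfAtBottom` only snaps the pane down when the user is already within 50 px of the bottom, so streaming output does not yank readers away from scrolled-up history. The distance check can be isolated as a pure function (the `isNearBottom` name is illustrative, assuming the hook's 50 px threshold):

```typescript
// Illustrative pure version of the hook's "near the bottom?" check.
// scrollHeight: full content height; scrollTop: current offset; clientHeight: visible height.
function isNearBottom(
  scrollHeight: number,
  scrollTop: number,
  clientHeight: number,
  threshold = 50,
): boolean {
  const distanceToBottom = scrollHeight - scrollTop - clientHeight
  return distanceToBottom <= threshold
}

console.log(isNearBottom(2000, 1460, 500)) // 40 px from the bottom: auto-scroll
console.log(isNearBottom(2000, 800, 500))  // 700 px up in history: leave the user alone
```

Only when the check passes does the hook set `scrollTop = scrollHeight`, which is the standard way to jump an element to its bottom.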
```diff
@@ -24,7 +24,7 @@ const chatStore = useChatStore()
 useCopyCode()
 const { isMobile } = useBasicLayout()
 const { addChat, updateChat, updateChatSome, getChatByUuidAndIndex } = useChat()
-const { scrollRef, scrollToBottom } = useScroll()
+const { scrollRef, scrollToBottom, scrollToBottomIfAtBottom } = useScroll()

 const { uuid } = route.params as { uuid: string }
```
```diff
@@ -33,6 +33,7 @@ const conversationList = computed(() => dataSources.value.filter(item => (!item.

 const prompt = ref<string>('')
 const loading = ref<boolean>(false)
+const usingContext = ref<boolean>(true)

 function handleSubmit() {
   onConversation()
```
```diff
@@ -68,7 +69,7 @@ async function onConversation() {
   let options: Chat.ConversationRequest = {}
   const lastContext = conversationList.value[conversationList.value.length - 1]?.conversationOptions

-  if (lastContext)
+  if (lastContext && usingContext.value)
     options = { ...lastContext }

   addChat(
```
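The `lastContext && usingContext.value` guard is what the new context switch hinges on: with the switch off, the request omits the previous conversation options, so the backend treats the prompt as the start of a fresh conversation. A reduced sketch of that decision (the `buildOptions` helper and the field names inside `ConversationRequest` are illustrative, not taken from the diff):

```typescript
// Illustrative shape of the per-request conversation options.
interface ConversationRequest {
  conversationId?: string
  parentMessageId?: string
}

// Context is forwarded only when it exists AND the user left the switch on.
function buildOptions(
  lastContext: ConversationRequest | undefined,
  usingContext: boolean,
): ConversationRequest {
  return (lastContext && usingContext) ? { ...lastContext } : {}
}

const last = { conversationId: 'abc', parentMessageId: 'msg-9' }
console.log(buildOptions(last, true))      // context carried forward
console.log(buildOptions(last, false))     // {} — switch off, fresh conversation
console.log(buildOptions(undefined, true)) // {} — nothing to carry yet
```

Spreading into a new object also keeps the stored history entry from being mutated by later request handling.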
```diff
@@ -113,14 +114,13 @@ async function onConversation() {
             requestOptions: { prompt: message, options: { ...options } },
           },
         )
-        scrollToBottom()
+        scrollToBottomIfAtBottom()
       }
       catch (error) {
         //
       }
     },
   })
-    scrollToBottom()
   }
   catch (error: any) {
     const errorMessage = error?.message ?? t('common.wrong')
```
```diff
@@ -165,10 +165,10 @@ async function onConversation() {
         requestOptions: { prompt: message, options: { ...options } },
       },
     )
-    scrollToBottom()
   }
   finally {
     loading.value = false
+    scrollToBottom()
   }
 }
```
```diff
@@ -284,7 +284,9 @@ function handleExport() {
   try {
     d.loading = true
     const ele = document.getElementById('image-wrapper')
-    const canvas = await html2canvas(ele as HTMLDivElement)
+    const canvas = await html2canvas(ele as HTMLDivElement, {
+      useCORS: true,
+    })
     const imgUrl = canvas.toDataURL('image/png')
     const tempLink = document.createElement('a')
     tempLink.style.display = 'none'
```
```diff
@@ -363,6 +365,24 @@ function handleStop() {
   }
 }

+function toggleUsingContext() {
+  usingContext.value = !usingContext.value
+  if (usingContext.value) {
+    dialog.info({
+      title: t('chat.usingContext'),
+      content: t('chat.turnOnContext'),
+      positiveText: t('common.yes'),
+    })
+  }
+  else {
+    dialog.info({
+      title: t('chat.usingContext'),
+      content: t('chat.turnOffContext'),
+      positiveText: t('common.yes'),
+    })
+  }
+}
+
 const placeholder = computed(() => {
   if (isMobile.value)
     return t('chat.placeholderMobile')
```
```diff
@@ -404,7 +424,11 @@ onUnmounted(() => {
       ref="scrollRef"
       class="h-full overflow-hidden overflow-y-auto"
     >
-      <div id="image-wrapper" class="w-full max-w-screen-xl m-auto" :class="[isMobile ? 'p-2' : 'p-4']">
+      <div
+        id="image-wrapper"
+        class="w-full max-w-screen-xl m-auto dark:bg-[#101014]"
+        :class="[isMobile ? 'p-2' : 'p-4']"
+      >
         <template v-if="!dataSources.length">
           <div class="flex items-center justify-center mt-4 text-center text-neutral-300">
             <SvgIcon icon="ri:bubble-chart-fill" class="mr-2 text-3xl" />
```
```diff
@@ -450,6 +474,11 @@ onUnmounted(() => {
             <SvgIcon icon="ri:download-2-line" />
           </span>
         </HoverButton>
+        <HoverButton @click="toggleUsingContext">
+          <span class="text-xl" :class="{ 'text-[#4b9e5f]': usingContext, 'text-[#a8071a]': !usingContext }">
+            <SvgIcon icon="ri:chat-history-line" />
+          </span>
+        </HoverButton>
         <NInput
           v-model:value="prompt"
           type="textarea"
```