Compare commits

30 Commits

Author SHA1 Message Date
ChenZhaoYu
24c203f8d4 chore: v2.10.3 2023-03-10 14:48:42 +08:00
cloudGrin
cbcbdda4c0 chore: simplify docker-compose deployment (#466) 2023-03-10 14:33:36 +08:00
ChenZhaoYu
a2fcd31f24 perf: use hooks 2023-03-10 14:30:02 +08:00
ChenZhaoYu
07b92d4ede fix: avatar gets squeezed when the text is too long 2023-03-10 14:20:56 +08:00
ChenZhaoYu
ed4ff67760 feat: move mobile buttons to the top 2023-03-10 14:05:59 +08:00
ChenZhaoYu
9c4644c969 feat: iOS safe-area insets 2023-03-10 14:04:50 +08:00
Xc
dd91c2a4e0 feat: support pwa (#452)
* feat: support pwa

* feat: support pwa
2023-03-10 13:25:30 +08:00
RyanzeX
7e8e15a628 fix: reset history title on clear (#453) 2023-03-10 13:24:36 +08:00
Yige
076c56d1d9 feat: support long replies (#450)
* chore: rename environment variables files

* docs: update README.md about .env file

* feat: support long reply

* chore: upgrade chatgpt package and set long reply to false default

* chore: set long reply to false default
2023-03-10 13:23:22 +08:00
ChenZhaoYu
133a24e25f chore: update chatgpt 2023-03-10 13:11:36 +08:00
ChenZhaoYu
73a67b8f64 chore: 2.10.2_2 2023-03-10 00:47:56 +08:00
ChenZhaoYu
222b3eaa4c feat: collapse buttons on mobile 2023-03-10 00:46:09 +08:00
ChenZhaoYu
c4baccdc48 chore: v2.10.2_1 2023-03-10 00:03:19 +08:00
ChenZhaoYu
7021a08ecf chore: reset scrollToBottom 2023-03-10 00:02:02 +08:00
ChenZhaoYu
e88b9bef13 chore: reset .env 2023-03-09 23:57:47 +08:00
ChenZhaoYu
c17cc16c0e chore: exception logging and log adjustments 2023-03-09 22:58:06 +08:00
ChenZhaoYu
d7d037618f chore: version 2.10.2 2023-03-09 22:49:15 +08:00
ChenZhaoYu
01edad7717 fix: incorrect default model detection 2023-03-09 22:45:43 +08:00
ChenZhaoYu
eff787a2b7 chore: remove two error messages 2023-03-09 20:16:18 +08:00
ChenZhaoYu
8616526136 fix: typo 2023-03-09 19:41:27 +08:00
ChenZhaoYu
32e3963390 chore: v2.10.1 2023-03-09 19:25:19 +08:00
ChenZhaoYu
7d52e6dd1e feat: model typo 2023-03-09 19:13:18 +08:00
ChenZhaoYu
ba41015df8 Merge branch 'feature' 2023-03-09 19:02:22 +08:00
ChenZhaoYu
f084460d1c fix: dark-mode style issue in exported images 2023-03-09 18:57:46 +08:00
Yige
a4ef23d603 chore: rename environment variables files (#395)
* chore: rename environment variables files

* docs: update README.md about .env file
2023-03-09 18:41:33 +08:00
xieccc
d3daa654a7 feat: add an API model configuration option (#404)
* chore: update docs

* Improve zh-TW locale (#379)

* fix: mobile styles

* feat: typo

* fix: revert scrolling behavior

* feat: add an API model configuration option

---------

Co-authored-by: ChenZhaoYu <790348264@qq.com>
Co-authored-by: Peter Dave Hello <hsu@peterdavehello.org>
2023-03-09 18:38:30 +08:00
acongee
9576edf26f feat: optimize how typewriter mode reveals new content (#394)
* add a scroll-to-bottom helper tuned for typewriter mode

* feat: optimize the logic that keeps the newest content visible in typewriter mode

* feat: when output finishes, also decide whether to scroll to the bottom based on the scrollbar position
2023-03-09 18:15:13 +08:00
CornerSkyless
5b74ac9cc6 fix: avatars missing from exported images (#392) 2023-03-09 17:59:26 +08:00
CornerSkyless
444e2ec2e8 feat: add a toggle for whether to send chat history as context (#393)
* chore: update docs

* Improve zh-TW locale (#379)

* fix: mobile styles

* feat: typo

* fix: revert scrolling behavior

* feat: support toggling whether previous messages are sent

* style: fix lint

---------

Co-authored-by: ChenZhaoYu <790348264@qq.com>
Co-authored-by: Peter Dave Hello <hsu@peterdavehello.org>
2023-03-09 17:56:11 +08:00
壳壳中的宇宙
60b7874f65 reduce the oversized Docker image (#415)
* chore: update docs

* Improve zh-TW locale (#379)

* fix: mobile styles

* feat: typo

* fix: revert scrolling behavior

* :zap: reduce the oversized Docker image

---------

Co-authored-by: ChenZhaoYu <790348264@qq.com>
Co-authored-by: Peter Dave Hello <hsu@peterdavehello.org>
2023-03-09 17:52:39 +08:00
34 changed files with 2243 additions and 189 deletions

`.dockerignore`

@@ -1,6 +1,6 @@
-**/node_modules
-*/node_modules
 node_modules
 Dockerfile
-.git
-.husky
+.*
+*/.*
+.github
+.vscode

`.env` (3 changes)

@@ -2,3 +2,6 @@
 VITE_GLOB_API_URL=/api
 VITE_APP_API_BASE_URL=http://localhost:3002/
+
+# Whether long replies are supported, which may result in higher API fees
+VITE_GLOB_OPEN_LONG_REPLY=false

`.gitignore` (3 changes)

@@ -27,3 +27,6 @@ coverage
 *.njsproj
 *.sln
 *.sw?
+
+# Environment variables files
+/service/.env

`CHANGELOG.md`

@@ -1,3 +1,64 @@
## v2.10.3
`2023-03-10`
> Statement: apart from the unofficial proxy used by `ChatGPTUnofficialProxyAPI`, all code in this project, including the upstream packages it references, is open source on `GitHub`. If you believe this project ships a backdoor, or that a bug got your account or API banned, then I am sorry. I may write plenty of bugs, but I am not malicious. This release is mainly frontend UI adjustments. Have a nice weekend.
## Feature
- Support long replies [[yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/450)][[Details](https://github.com/Chanzhaoyu/chatgpt-web/pull/450)]
- Support `PWA` [[chenxch](https://github.com/Chanzhaoyu/chatgpt-web/pull/452)]
## Enhancement
- Adjusted mobile buttons and optimized the layout
- Adjusted safe-area insets on `iOS`
- Simplified `docker-compose` deployment [[cloudGrin](https://github.com/Chanzhaoyu/chatgpt-web/pull/466)]
## BugFix
- Fixed the sidebar title not resetting when a conversation is cleared [[RyanXinOne](https://github.com/Chanzhaoyu/chatgpt-web/pull/453)]
- Fixed the settings button disappearing when the settings text is too long
## Other
- Updated dependencies
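The long-reply feature above can be pictured as a small continuation loop: when a response is cut off, the client asks again and concatenates the pieces. A minimal sketch, where `fetchOnce` is a hypothetical stand-in for one streaming request (the project's real loop lives in the chat view and re-issues `fetchChatAPIProcess`):

```typescript
// Illustrative only: `fetchOnce` stands in for one request/response round;
// names here are not the project's actual API.
interface Chunk {
  text: string
  done: boolean
}

async function fetchLongReply(
  fetchOnce: (soFar: string) => Promise<Chunk>,
  maxRounds = 5,
): Promise<string> {
  let reply = ''
  for (let round = 0; round < maxRounds; round++) {
    const chunk = await fetchOnce(reply)
    reply += chunk.text // accumulate partial answers
    if (chunk.done)
      break
  }
  return reply
}
```

The `maxRounds` cap matters because each continuation round is a billable API call, which is why the flag defaults to off.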
## v2.10.2
`2023-03-09`
A follow-up to `2.10.1` [Details](https://github.com/Chanzhaoyu/chatgpt-web/releases/tag/v2.10.1)
## Enhancement
- On mobile, hide the left-side buttons while the input box has focus
## BugFix
- Fixed a faulty check for the `OPENAI_API_MODEL` variable added in `2.10.1`, which broke specifying the default model. Sorry.
- Reverted the frontend variable change from `2.10.1` that affected `Docker` builds
## v2.10.1
`2023-03-09`
Note: the `.env` file has been removed in favor of `.env.example`. If you deploy manually, you now need to create a `.env` file yourself and copy the variables you need from `.env.example`. The `.env` file is now ignored by `Git`, because:
- Committing `.env` to the project was bad practice from the start
- Forks that modify it for testing keep getting nagged by `Git` change prompts
- Thanks to [yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/395) for the reminder and the fix
In the last couple of days OpenAI has started cutting off third-party proxies, so `accessToken` access may stop working soon, or already has. Abnormal `API` usage is also leading to account bans, for reasons that remain unclear. If you see errors when using the `API`, check the backend console output and keep an eye on your email.
## Feature
- Thanks to [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/393) for adding a toggle that controls whether context is sent
## Enhancement
- Thanks to [nagaame](https://github.com/Chanzhaoyu/chatgpt-web/pull/415) for reducing the oversized `docker` image
- Thanks to [xieccc](https://github.com/Chanzhaoyu/chatgpt-web/pull/404) for adding the `API` model configuration variable `OPENAI_API_MODEL`
- Thanks to [acongee](https://github.com/Chanzhaoyu/chatgpt-web/pull/394) for improving scrollbar behavior during output
## BugFix
- Thanks to [CornerSkyless](https://github.com/Chanzhaoyu/chatgpt-web/pull/392) for fixing avatars missing from exported images
- Fixed dark-mode style issues in exported images
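The context toggle contributed by CornerSkyless boils down to attaching the previous conversation pointers only when the switch is on; a minimal sketch with illustrative names:

```typescript
interface ConversationOptions {
  conversationId?: string
  parentMessageId?: string
}

// When the context switch is off, the request carries no conversation
// pointers, so the model treats the prompt as a fresh conversation.
function buildRequestOptions(
  lastContext: ConversationOptions | undefined,
  usingContext: boolean,
): ConversationOptions {
  return (lastContext && usingContext) ? { ...lastContext } : {}
}
```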
## v2.10.0
`2023-03-07`

`Dockerfile`

@@ -4,7 +4,11 @@ FROM node:lts-alpine AS builder
 COPY ./ /app
 WORKDIR /app
-RUN npm install pnpm -g && pnpm install && pnpm run build
+RUN apk add --no-cache git \
+    && npm install pnpm -g \
+    && pnpm install \
+    && pnpm run build \
+    && rm -rf /root/.npm /root/.pnpm-store /usr/local/share/.cache /tmp/*
 # service
 FROM node:lts-alpine
@@ -13,8 +17,12 @@ COPY /service /app
 COPY --from=builder /app/dist /app/public
 WORKDIR /app
-RUN npm install pnpm -g && pnpm install
+RUN apk add --no-cache git \
+    && npm install pnpm -g \
+    && pnpm install --only=production \
+    && rm -rf /root/.npm /root/.pnpm-store /usr/local/share/.cache /tmp/*
 EXPOSE 3002
 CMD ["pnpm", "run", "start"]

`README.md`

@@ -55,7 +55,7 @@ Comparison:
 [Details](https://github.com/Chanzhaoyu/chatgpt-web/issues/138)
 Switching Methods:
-1. Go to the `service/.env` file.
+1. Go to the `service/.env.example` file and copy the contents to the `service/.env` file.
 2. For `OpenAI API Key`, fill in the `OPENAI_API_KEY` field [(Get apiKey)](https://platform.openai.com/overview).
 3. For `Web API`, fill in the `OPENAI_ACCESS_TOKEN` field [(Get accessToken)](https://chat.openai.com/api/auth/session).
 4. When both are present, `OpenAI API Key` takes precedence.
@@ -168,6 +168,7 @@ pnpm dev
 - `OPENAI_API_KEY` one of two
 - `OPENAI_ACCESS_TOKEN` one of two, `OPENAI_API_KEY` takes precedence when both are present
 - `OPENAI_API_BASE_URL` optional, available when `OPENAI_API_KEY` is set
+- `OPENAI_API_MODEL` optional, available when `OPENAI_API_KEY` is set
 - `API_REVERSE_PROXY` optional, available when `OPENAI_ACCESS_TOKEN` is set [Reference](#introduction)
 - `AUTH_SECRET_KEY` Access Password, optional
 - `TIMEOUT_MS` timeout, in milliseconds, optional
@@ -210,6 +211,8 @@ services:
     OPENAI_ACCESS_TOKEN: xxxxxx
     # api interface url, optional, available when OPENAI_API_KEY is set
     OPENAI_API_BASE_URL: xxxx
+    # api model, optional, available when OPENAI_API_KEY is set
+    OPENAI_API_MODEL: xxxx
     # reverse proxy, optional
     API_REVERSE_PROXY: xxx
     # access password, optional
@@ -222,6 +225,7 @@ services:
     SOCKS_PROXY_PORT: xxxx
 ```
 The `OPENAI_API_BASE_URL` is optional and only used when setting the `OPENAI_API_KEY`.
+The `OPENAI_API_MODEL` is optional and only used when setting the `OPENAI_API_KEY`.
 ### Deployment with Railway
@@ -237,6 +241,7 @@ The `OPENAI_API_BASE_URL` is optional and only used when setting the `OPENAI_API
 | `OPENAI_API_KEY` | Optional | Required for `OpenAI API`. `apiKey` can be obtained from [here](https://platform.openai.com/overview). |
 | `OPENAI_ACCESS_TOKEN`| Optional | Required for `Web API`. `accessToken` can be obtained from [here](https://chat.openai.com/api/auth/session).|
 | `OPENAI_API_BASE_URL` | Optional, only for `OpenAI API` | API endpoint. |
+| `OPENAI_API_MODEL` | Optional, only for `OpenAI API` | API model. |
 | `API_REVERSE_PROXY` | Optional, only for `Web API` | Reverse proxy address for `Web API`. [Details](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) |
 | `SOCKS_PROXY_HOST` | Optional, effective with `SOCKS_PROXY_PORT` | Socks proxy. |
 | `SOCKS_PROXY_PORT` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy port. |
@@ -266,7 +271,7 @@ PS: You can also run `pnpm start` directly on the server without packaging.
 #### Frontend webpage
-1. Modify `VITE_APP_API_BASE_URL` in `.env` at the root directory to your actual backend interface address.
+1. Refer to the root directory `.env.example` file content to create `.env` file, modify `VITE_APP_API_BASE_URL` in `.env` at the root directory to your actual backend interface address.
 2. Run the following command in the root directory and then copy the files in the `dist` folder to the root directory of your website service.
 [Reference information](https://cn.vitejs.dev/guide/static-deploy.html#building-the-app)

`README.zh-CN.md`

@@ -54,7 +54,7 @@
 [查看详情](https://github.com/Chanzhaoyu/chatgpt-web/issues/138)
 切换方式:
-1. 进入 `service/.env` 文件
+1. 进入 `service/.env.example` 文件,复制内容到 `service/.env` 文件
 2. 使用 `OpenAI API Key` 请填写 `OPENAI_API_KEY` 字段 [(获取 apiKey)](https://platform.openai.com/overview)
 3. 使用 `Web API` 请填写 `OPENAI_ACCESS_TOKEN` 字段 [(获取 accessToken)](https://chat.openai.com/api/auth/session)
 4. 同时存在时以 `OpenAI API Key` 优先
@@ -166,6 +166,7 @@ pnpm dev
 - `OPENAI_API_KEY` 二选一
 - `OPENAI_ACCESS_TOKEN` 二选一,同时存在时,`OPENAI_API_KEY` 优先
 - `OPENAI_API_BASE_URL` 可选,设置 `OPENAI_API_KEY` 时可用
+- `OPENAI_API_MODEL` 可选,设置 `OPENAI_API_KEY` 时可用
 - `API_REVERSE_PROXY` 可选,设置 `OPENAI_ACCESS_TOKEN` 时可用 [参考](#介绍)
 - `AUTH_SECRET_KEY` 访问权限密钥,可选
 - `TIMEOUT_MS` 超时,单位毫秒,可选
@@ -208,6 +209,8 @@ services:
     OPENAI_ACCESS_TOKEN: xxxxxx
     # API接口地址,可选,设置 OPENAI_API_KEY 时可用
     OPENAI_API_BASE_URL: xxxx
+    # API模型,可选,设置 OPENAI_API_KEY 时可用
+    OPENAI_API_MODEL: xxxx
     # 反向代理,可选
     API_REVERSE_PROXY: xxx
     # 访问权限密钥,可选
@@ -220,6 +223,7 @@ services:
     SOCKS_PROXY_PORT: xxxx
 ```
 - `OPENAI_API_BASE_URL` 可选,设置 `OPENAI_API_KEY` 时可用
+- `OPENAI_API_MODEL` 可选,设置 `OPENAI_API_KEY` 时可用
 ### 使用 Railway 部署
 [![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new/template/yytmgc)
@@ -234,6 +238,7 @@ services:
 | `OPENAI_API_KEY` | `OpenAI API` 二选一 | 使用 `OpenAI API` 所需的 `apiKey` [(获取 apiKey)](https://platform.openai.com/overview) |
 | `OPENAI_ACCESS_TOKEN` | `Web API` 二选一 | 使用 `Web API` 所需的 `accessToken` [(获取 accessToken)](https://chat.openai.com/api/auth/session) |
 | `OPENAI_API_BASE_URL` | 可选,`OpenAI API` 时可用 | `API`接口地址 |
+| `OPENAI_API_MODEL` | 可选,`OpenAI API` 时可用 | `API`模型 |
 | `API_REVERSE_PROXY` | 可选,`Web API` 时可用 | `Web API` 反向代理地址 [详情](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) |
 | `SOCKS_PROXY_HOST` | 可选,和 `SOCKS_PROXY_PORT` 一起时生效 | Socks代理 |
 | `SOCKS_PROXY_PORT` | 可选,和 `SOCKS_PROXY_HOST` 一起时生效 | Socks代理端口 |
@@ -261,7 +266,7 @@ PS: 不进行打包,直接在服务器上运行 `pnpm start` 也可
 #### 前端网页
-1、修改根目录下 `.env` 内 `VITE_APP_API_BASE_URL` 为你的实际后端接口地址
+1、修改根目录下 `.env` 文件中的 `VITE_APP_API_BASE_URL` 为你的实际后端接口地址
 2、根目录下运行以下命令,然后将 `dist` 文件夹内的文件复制到你网站服务的根目录下

`docker-compose/docker-compose.yml`

@@ -12,6 +12,8 @@ services:
     OPENAI_ACCESS_TOKEN: xxxxxx
     # API接口地址,可选,设置 OPENAI_API_KEY 时可用
     OPENAI_API_BASE_URL: xxxx
+    # API模型,可选,设置 OPENAI_API_KEY 时可用
+    OPENAI_API_MODEL: xxxx
     # 反向代理,可选
     API_REVERSE_PROXY: xxx
     # 访问权限密钥,可选
@@ -23,13 +25,13 @@ services:
     # Socks代理端口,可选,和 SOCKS_PROXY_HOST 一起时生效
     SOCKS_PROXY_PORT: xxxx
   nginx:
-    build: nginx
-    image: chatgpt/nginx
+    image: nginx:alpine
     ports:
       - '80:80'
     expose:
       - '80'
     volumes:
-      - ./nginx/html/:/etc/nginx/html/
+      - ./nginx/html:/usr/share/nginx/html
+      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
     links:
       - app

`docker-compose/nginx/Dockerfile` (deleted)

@@ -1,10 +0,0 @@
-FROM hub.c.163.com/library/nginx
-MAINTAINER jo "tionsin@live.com"
-RUN rm -rf /etc/nginx/conf.d/default.conf
-COPY ./nginx.conf /etc/nginx/conf.d/default.conf
-COPY ./html/ /usr/share/nginx/html/
-EXPOSE 80
-CMD ["nginx", "-g", "daemon off;"]

`docker-compose/nginx/nginx.conf`

@@ -3,9 +3,9 @@ server {
     server_name localhost;
     charset utf-8;
     error_page 500 502 503 504 /50x.html;
-    location = / {
+    location / {
         root /usr/share/nginx/html;
-        index index.html index.htm;
+        try_files $uri /index.html;
     }
     location /api {

`docker-compose/readme.md`

@@ -1,8 +1,7 @@
 ### docker-compose 部署教程
 - 将打包好的前端文件放到 `nginx/html` 目录下
 - ```shell
-  # 打包启动
-  docker-compose build
+  # 启动
   docker-compose up -d
   ```
 - ```shell

`package.json`

@@ -1,6 +1,6 @@
 {
   "name": "chatgpt-web",
-  "version": "2.10.0",
+  "version": "2.10.3",
   "private": false,
   "description": "ChatGPT Web",
   "author": "ChenZhaoYu <chenzhaoyu1994@gmail.com>",
@@ -58,6 +58,7 @@
     "tailwindcss": "^3.2.7",
     "typescript": "~4.9.5",
     "vite": "^4.1.4",
+    "vite-plugin-pwa": "^0.14.4",
     "vue-tsc": "^1.2.0"
   },
   "lint-staged": {

`pnpm-lock.yaml` (generated, 1852 changes)

File diff suppressed because it is too large.

`public/pwa-192x192.png` (new binary file, 7.2 KiB; not shown)

`public/pwa-512x512.png` (new binary file, 34 KiB; not shown)

`service/.env.example`

@@ -7,6 +7,9 @@ OPENAI_ACCESS_TOKEN=
 # OpenAI API Base URL - https://api.openai.com
 OPENAI_API_BASE_URL=
+
+# OpenAI API Model - https://platform.openai.com/docs/models
+OPENAI_API_MODEL=
 # Reverse Proxy
 API_REVERSE_PROXY=

`service/package.json`

@@ -24,7 +24,7 @@
     "common:cleanup": "rimraf node_modules && rimraf pnpm-lock.yaml"
   },
   "dependencies": {
-    "chatgpt": "^5.0.8",
+    "chatgpt": "^5.0.9",
     "dotenv": "^16.0.3",
     "esno": "^0.16.3",
     "express": "^4.18.2",

`service/pnpm-lock.yaml`

@@ -4,7 +4,7 @@ specifiers:
   '@antfu/eslint-config': ^0.35.3
   '@types/express': ^4.17.17
   '@types/node': ^18.14.6
-  chatgpt: ^5.0.8
+  chatgpt: ^5.0.9
   dotenv: ^16.0.3
   eslint: ^8.35.0
   esno: ^0.16.3
@@ -17,7 +17,7 @@ specifiers:
   typescript: ^4.9.5
 dependencies:
-  chatgpt: 5.0.8
+  chatgpt: 5.0.9
   dotenv: 16.0.3
   esno: 0.16.3
   express: 4.18.2
@@ -902,8 +902,8 @@ packages:
     resolution: {integrity: sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg==}
     dev: true
-  /chatgpt/5.0.8:
-    resolution: {integrity: sha512-Bjh7Y15QIsZ+SkQvbbZGymv1PGxkZ7X1vwqAwvyqaMMhbipU4kxht/GL62VCxhoUCXPwxTfScbFeNFtNldgqaw==}
+  /chatgpt/5.0.9:
+    resolution: {integrity: sha512-H0MMegLKcYyYh3LeFO4ubIdJSiSAl4rRjTeXf3KjHfGXDM7QZ1EkiTH9RuIoaNzOm8rJTn4QEhrwBbOIpbalxw==}
     engines: {node: '>=14'}
     hasBin: true
     dependencies:

`service/src/chatgpt/index.ts`

@@ -8,10 +8,8 @@ import { sendResponse } from '../utils'
 import type { ApiModel, ChatContext, ChatGPTUnofficialProxyAPIOptions, ModelConfig } from '../types'
 const ErrorCodeMessage: Record<string, string> = {
-  400: '[OpenAI] 模型的最大上下文长度是4096个令牌,请减少信息的长度。 | This model\'s maximum context length is 4096 tokens.',
   401: '[OpenAI] 提供错误的API密钥 | Incorrect API key provided',
   403: '[OpenAI] 服务器拒绝访问,请稍后再试 | Server refused to access, please try again later',
-  429: '[OpenAI] 服务器限流,请稍后再试 | Server was limited, please try again later',
   502: '[OpenAI] 错误的网关 | Bad Gateway',
   503: '[OpenAI] 服务器繁忙,请稍后再试 | Server is busy, please try again later',
   504: '[OpenAI] 网关超时 | Gateway Time-out',
@@ -33,11 +31,14 @@ let api: ChatGPTAPI | ChatGPTUnofficialProxyAPI
 // More Info: https://github.com/transitive-bullshit/chatgpt-api
 if (process.env.OPENAI_API_KEY) {
+  const OPENAI_API_MODEL = process.env.OPENAI_API_MODEL
+  const model = (typeof OPENAI_API_MODEL === 'string' && OPENAI_API_MODEL.length > 0)
+    ? OPENAI_API_MODEL
+    : 'gpt-3.5-turbo'
   const options: ChatGPTAPIOptions = {
     apiKey: process.env.OPENAI_API_KEY,
-    completionParams: {
-      model: 'gpt-3.5-turbo',
-    },
+    completionParams: { model },
     debug: false,
   }
@@ -86,8 +87,8 @@ async function chatReplyProcess(
   lastContext?: { conversationId?: string; parentMessageId?: string },
   process?: (chat: ChatMessage) => void,
 ) {
-  if (!message)
-    return sendResponse({ type: 'Fail', message: 'Message is empty' })
+  // if (!message)
+  //   return sendResponse({ type: 'Fail', message: 'Message is empty' })
   try {
     let options: SendMessageOptions = { timeoutMs }
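The model-selection change in this diff reduces to a small fallback rule; a standalone sketch for illustration:

```typescript
// Mirrors the diff above: a non-empty OPENAI_API_MODEL wins, otherwise
// fall back to the previously hard-coded default model.
function resolveModel(envValue: string | undefined): string {
  return (typeof envValue === 'string' && envValue.length > 0)
    ? envValue
    : 'gpt-3.5-turbo'
}
```

Guarding on both type and length is what fixes the v2.10.1 bug: an unset or empty environment variable must not override the default.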

`src/components/common/UserAvatar/index.vue`

@@ -11,8 +11,8 @@ const userInfo = computed(() => userStore.userInfo)
 </script>
 <template>
-  <div class="flex items-center">
-    <div class="w-10 h-10 overflow-hidden rounded-full">
+  <div class="flex items-center overflow-hidden">
+    <div class="w-10 h-10 overflow-hidden rounded-full shrink-0">
       <template v-if="isString(userInfo.avatar) && userInfo.avatar.length > 0">
         <NAvatar
           size="large"
@@ -25,11 +25,11 @@ const userInfo = computed(() => userStore.userInfo)
         <NAvatar size="large" round :src="defaultAvatar" />
       </template>
     </div>
-    <div class="ml-2">
-      <h2 class="font-bold text-md">
+    <div class="flex-1 min-w-0 ml-2">
+      <h2 class="overflow-hidden font-bold text-md text-ellipsis whitespace-nowrap">
         {{ userInfo.name ?? 'ChenZhaoYu' }}
       </h2>
-      <p class="text-xs text-gray-500">
+      <p class="overflow-hidden text-xs text-gray-500 text-ellipsis whitespace-nowrap">
         <span
           v-if="isString(userInfo.description) && userInfo.description !== ''"
           v-html="userInfo.description"

`src/locales/en-US.ts`

@@ -27,6 +27,9 @@ export default {
     exportImageConfirm: 'Are you sure to export this chat to png?',
     exportSuccess: 'Export Success',
     exportFailed: 'Export Failed',
+    usingContext: 'Context Mode',
+    turnOnContext: 'In the current mode, sending messages will carry previous chat records.',
+    turnOffContext: 'In the current mode, sending messages will not carry previous chat records.',
     deleteMessage: 'Delete Message',
     deleteMessageConfirm: 'Are you sure to delete this message?',
     deleteHistoryConfirm: 'Are you sure to clear this history?',

`src/locales/zh-CN.ts`

@@ -27,6 +27,9 @@ export default {
     exportImageConfirm: '是否将会话保存为图片?',
     exportSuccess: '保存成功',
     exportFailed: '保存失败',
+    usingContext: '上下文模式',
+    turnOnContext: '当前模式下, 发送消息会携带之前的聊天记录',
+    turnOffContext: '当前模式下, 发送消息不会携带之前的聊天记录',
     deleteMessage: '删除消息',
     deleteMessageConfirm: '是否删除此消息?',
     deleteHistoryConfirm: '确定删除此记录?',

`src/locales/zh-TW.ts`

@@ -27,6 +27,9 @@ export default {
     exportImageConfirm: '是否將對話儲存為圖片?',
     exportSuccess: '儲存成功',
     exportFailed: '儲存失敗',
+    usingContext: '上下文模式',
+    turnOnContext: '在當前模式下, 發送訊息會攜帶之前的聊天記錄。',
+    turnOffContext: '在當前模式下, 發送訊息不會攜帶之前的聊天記錄。',
     deleteMessage: '刪除訊息',
     deleteMessageConfirm: '是否刪除此訊息?',
     deleteHistoryConfirm: '確定刪除此紀錄?',

`src/store/modules/chat/index.ts`

@@ -165,6 +165,7 @@ export const useChatStore = defineStore('chat-store', {
       if (!uuid || uuid === 0) {
         if (this.chat.length) {
           this.chat[0].data = []
+          this.history[0].title = 'New Chat'
           this.recordState()
         }
         return
@@ -173,6 +174,7 @@ export const useChatStore = defineStore('chat-store', {
       const index = this.chat.findIndex(item => item.uuid === uuid)
       if (index !== -1) {
         this.chat[index].data = []
+        this.history[index].title = 'New Chat'
         this.recordState()
       }
     },

Global stylesheet

@@ -6,4 +6,5 @@ body,
 body {
   padding-bottom: constant(safe-area-inset-bottom);
+  padding-bottom: env(safe-area-inset-bottom);
 }

`src/views/chat/components/Header/index.vue` (new file)

@@ -0,0 +1,78 @@
<script lang="ts" setup>
import { computed, nextTick } from 'vue'
import { HoverButton, SvgIcon } from '@/components/common'
import { useAppStore, useChatStore } from '@/store'
interface Props {
usingContext: boolean
}
interface Emit {
(ev: 'export'): void
(ev: 'toggleUsingContext'): void
}
defineProps<Props>()
const emit = defineEmits<Emit>()
const appStore = useAppStore()
const chatStore = useChatStore()
const collapsed = computed(() => appStore.siderCollapsed)
const currentChatHistory = computed(() => chatStore.getChatHistoryByCurrentActive)
function handleUpdateCollapsed() {
appStore.setSiderCollapsed(!collapsed.value)
}
function onScrollToTop() {
const scrollRef = document.querySelector('#scrollRef')
if (scrollRef)
nextTick(() => scrollRef.scrollTop = 0)
}
function handleExport() {
emit('export')
}
function toggleUsingContext() {
emit('toggleUsingContext')
}
</script>
<template>
<header
class="sticky top-0 left-0 right-0 z-30 border-b dark:border-neutral-800 bg-white/80 dark:bg-black/20 backdrop-blur"
>
<div class="relative flex items-center justify-between min-w-0 overflow-hidden h-14">
<div class="flex items-center">
<button
class="flex items-center justify-center w-11 h-11"
@click="handleUpdateCollapsed"
>
<SvgIcon v-if="collapsed" class="text-2xl" icon="ri:align-justify" />
<SvgIcon v-else class="text-2xl" icon="ri:align-right" />
</button>
</div>
<h1
class="flex-1 px-4 pr-6 overflow-hidden cursor-pointer select-none text-ellipsis whitespace-nowrap"
@dblclick="onScrollToTop"
>
{{ currentChatHistory?.title ?? '' }}
</h1>
<div class="flex items-center space-x-2">
<HoverButton @click="toggleUsingContext">
<span class="text-xl" :class="{ 'text-[#4b9e5f]': usingContext, 'text-[#a8071a]': !usingContext }">
<SvgIcon icon="ri:chat-history-line" />
</span>
</HoverButton>
<HoverButton @click="handleExport">
<span class="text-xl text-[#4f555e] dark:text-white">
<SvgIcon icon="ri:download-2-line" />
</span>
</HoverButton>
</div>
</div>
</header>
</template>

`src/views/chat/components/Message` (text wrapper styles)

@@ -38,7 +38,7 @@ const wrapClass = computed(() => {
     'text-wrap',
     'min-w-[20px]',
     'rounded-md',
-    isMobile.value ? 'p-2' : 'p-3',
+    isMobile.value ? 'p-2' : 'px-3 py-2',
     props.inversion ? 'bg-[#d2f9d1]' : 'bg-[#f4f6f8]',
     props.inversion ? 'dark:bg-[#a1dc95]' : 'dark:bg-[#1e1e20]',
     { 'text-red-500': props.error },

`src/views/chat/hooks/useScroll.ts`

@@ -7,6 +7,7 @@ interface ScrollReturn {
   scrollRef: Ref<ScrollElement>
   scrollToBottom: () => Promise<void>
   scrollToTop: () => Promise<void>
+  scrollToBottomIfAtBottom: () => Promise<void>
 }
 export function useScroll(): ScrollReturn {
@@ -24,9 +25,20 @@ export function useScroll(): ScrollReturn {
     scrollRef.value.scrollTop = 0
   }
+  const scrollToBottomIfAtBottom = async () => {
+    await nextTick()
+    if (scrollRef.value) {
+      const threshold = 50 // 阈值,表示滚动条到底部的距离阈值
+      const distanceToBottom = scrollRef.value.scrollHeight - scrollRef.value.scrollTop - scrollRef.value.clientHeight
+      if (distanceToBottom <= threshold)
+        scrollRef.value.scrollTop = scrollRef.value.scrollHeight
+    }
+  }
   return {
     scrollRef,
     scrollToBottom,
     scrollToTop,
+    scrollToBottomIfAtBottom,
   }
 }
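The `scrollToBottomIfAtBottom` hook hinges on one distance calculation; extracted here as a pure function for illustration:

```typescript
// Distance from the bottom of the viewport to the bottom of the content;
// the hook only auto-scrolls when this is within the threshold (50px in
// the diff above), so a user who has scrolled up is left alone.
function isNearBottom(
  scrollHeight: number,
  scrollTop: number,
  clientHeight: number,
  threshold = 50,
): boolean {
  return scrollHeight - scrollTop - clientHeight <= threshold
}
```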

`src/views/chat/hooks/useUsingContext.ts` (new file)

@@ -0,0 +1,21 @@
import { ref } from 'vue'
import { useMessage } from 'naive-ui'
import { t } from '@/locales'
export function useUsingContext() {
const ms = useMessage()
const usingContext = ref<boolean>(true)
function toggleUsingContext() {
usingContext.value = !usingContext.value
if (usingContext.value)
ms.success(t('chat.turnOnContext'))
else
ms.warning(t('chat.turnOffContext'))
}
return {
usingContext,
toggleUsingContext,
}
}

View File

@@ -7,6 +7,8 @@ import { Message } from './components'
import { useScroll } from './hooks/useScroll' import { useScroll } from './hooks/useScroll'
import { useChat } from './hooks/useChat' import { useChat } from './hooks/useChat'
import { useCopyCode } from './hooks/useCopyCode' import { useCopyCode } from './hooks/useCopyCode'
import { useUsingContext } from './hooks/useUsingContext'
import HeaderComponent from './components/Header/index.vue'
import { HoverButton, SvgIcon } from '@/components/common' import { HoverButton, SvgIcon } from '@/components/common'
import { useBasicLayout } from '@/hooks/useBasicLayout' import { useBasicLayout } from '@/hooks/useBasicLayout'
import { useChatStore } from '@/store' import { useChatStore } from '@/store'
@@ -15,6 +17,8 @@ import { t } from '@/locales'
let controller = new AbortController() let controller = new AbortController()
const openLongReply = import.meta.env.VITE_GLOB_OPEN_LONG_REPLY === 'true'
const route = useRoute() const route = useRoute()
const dialog = useDialog() const dialog = useDialog()
const ms = useMessage() const ms = useMessage()
@@ -22,9 +26,11 @@ const ms = useMessage()
const chatStore = useChatStore() const chatStore = useChatStore()
useCopyCode() useCopyCode()
const { isMobile } = useBasicLayout() const { isMobile } = useBasicLayout()
const { addChat, updateChat, updateChatSome, getChatByUuidAndIndex } = useChat() const { addChat, updateChat, updateChatSome, getChatByUuidAndIndex } = useChat()
const { scrollRef, scrollToBottom } = useScroll() const { scrollRef, scrollToBottom } = useScroll()
const { usingContext, toggleUsingContext } = useUsingContext()
const { uuid } = route.params as { uuid: string } const { uuid } = route.params as { uuid: string }
@@ -39,7 +45,7 @@ function handleSubmit() {
 }

 async function onConversation() {
-  const message = prompt.value
+  let message = prompt.value
   if (loading.value)
     return
@@ -68,7 +74,7 @@ async function onConversation() {
   let options: Chat.ConversationRequest = {}
   const lastContext = conversationList.value[conversationList.value.length - 1]?.conversationOptions
-  if (lastContext)
+  if (lastContext && usingContext.value)
     options = { ...lastContext }
   addChat(
@@ -86,41 +92,53 @@ async function onConversation() {
   scrollToBottom()
   try {
-    await fetchChatAPIProcess<Chat.ConversationResponse>({
-      prompt: message,
-      options,
-      signal: controller.signal,
-      onDownloadProgress: ({ event }) => {
-        const xhr = event.target
-        const { responseText } = xhr
-        // Always process the final line
-        const lastIndex = responseText.lastIndexOf('\n')
-        let chunk = responseText
-        if (lastIndex !== -1)
-          chunk = responseText.substring(lastIndex)
-        try {
-          const data = JSON.parse(chunk)
-          updateChat(
-            +uuid,
-            dataSources.value.length - 1,
-            {
-              dateTime: new Date().toLocaleString(),
-              text: data.text ?? '',
-              inversion: false,
-              error: false,
-              loading: false,
-              conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
-              requestOptions: { prompt: message, options: { ...options } },
-            },
-          )
-          scrollToBottom()
-        }
-        catch (error) {
-          //
-        }
-      },
-    })
-    scrollToBottom()
+    let lastText = ''
+    const fetchChatAPIOnce = async () => {
+      await fetchChatAPIProcess<Chat.ConversationResponse>({
+        prompt: message,
+        options,
+        signal: controller.signal,
+        onDownloadProgress: ({ event }) => {
+          const xhr = event.target
+          const { responseText } = xhr
+          // Always process the final line
+          const lastIndex = responseText.lastIndexOf('\n')
+          let chunk = responseText
+          if (lastIndex !== -1)
+            chunk = responseText.substring(lastIndex)
+          try {
+            const data = JSON.parse(chunk)
+            updateChat(
+              +uuid,
+              dataSources.value.length - 1,
+              {
+                dateTime: new Date().toLocaleString(),
+                text: lastText + data.text ?? '',
+                inversion: false,
+                error: false,
+                loading: false,
+                conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
+                requestOptions: { prompt: message, options: { ...options } },
+              },
+            )
+
+            if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
+              options.parentMessageId = data.id
+              lastText = data.text
+              message = ''
+              return fetchChatAPIOnce()
+            }
+
+            scrollToBottom()
+          }
+          catch (error) {
+            //
+          }
+        },
+      })
+    }
+    await fetchChatAPIOnce()
   }
   catch (error: any) {
     const errorMessage = error?.message ?? t('common.wrong')
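The `onDownloadProgress` handler above re-parses only the text after the last newline in the accumulated stream, since each progress event delivers the whole `responseText` so far. That step can be isolated as a small helper — `latestChunk` is an illustrative name, not from the repo:

```typescript
// Return the substring after the last newline in the accumulated
// responseText, so only the newest line-delimited JSON payload is
// parsed on each progress event.
function latestChunk(responseText: string): string {
  const lastIndex = responseText.lastIndexOf('\n')
  return lastIndex === -1 ? responseText : responseText.substring(lastIndex)
}

// A growing stream of line-delimited JSON objects, as the API sends them:
const stream = '{"text":"He"}\n{"text":"Hel"}\n{"text":"Hello"}'
// The leading '\n' left by substring(lastIndex) is tolerated by JSON.parse.
const parsed = JSON.parse(latestChunk(stream))
```

Note that `substring(lastIndex)` keeps the newline itself; `JSON.parse` ignores leading whitespace, so this is harmless.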
@@ -180,7 +198,7 @@ async function onRegenerate(index: number) {
   const { requestOptions } = dataSources.value[index]
-  const message = requestOptions?.prompt ?? ''
+  let message = requestOptions?.prompt ?? ''
   let options: Chat.ConversationRequest = {}
@@ -204,39 +222,50 @@ async function onRegenerate(index: number) {
   )

   try {
-    await fetchChatAPIProcess<Chat.ConversationResponse>({
-      prompt: message,
-      options,
-      signal: controller.signal,
-      onDownloadProgress: ({ event }) => {
-        const xhr = event.target
-        const { responseText } = xhr
-        // Always process the final line
-        const lastIndex = responseText.lastIndexOf('\n')
-        let chunk = responseText
-        if (lastIndex !== -1)
-          chunk = responseText.substring(lastIndex)
-        try {
-          const data = JSON.parse(chunk)
-          updateChat(
-            +uuid,
-            index,
-            {
-              dateTime: new Date().toLocaleString(),
-              text: data.text ?? '',
-              inversion: false,
-              error: false,
-              loading: false,
-              conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
-              requestOptions: { prompt: message, ...options },
-            },
-          )
-        }
-        catch (error) {
-          //
-        }
-      },
-    })
+    let lastText = ''
+    const fetchChatAPIOnce = async () => {
+      await fetchChatAPIProcess<Chat.ConversationResponse>({
+        prompt: message,
+        options,
+        signal: controller.signal,
+        onDownloadProgress: ({ event }) => {
+          const xhr = event.target
+          const { responseText } = xhr
+          // Always process the final line
+          const lastIndex = responseText.lastIndexOf('\n')
+          let chunk = responseText
+          if (lastIndex !== -1)
+            chunk = responseText.substring(lastIndex)
+          try {
+            const data = JSON.parse(chunk)
+            updateChat(
+              +uuid,
+              index,
+              {
+                dateTime: new Date().toLocaleString(),
+                text: lastText + data.text ?? '',
+                inversion: false,
+                error: false,
+                loading: false,
+                conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
+                requestOptions: { prompt: message, ...options },
+              },
+            )
+
+            if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
+              options.parentMessageId = data.id
+              lastText = data.text
+              message = ''
+              return fetchChatAPIOnce()
+            }
+          }
+          catch (error) {
+            //
+          }
+        },
+      })
+    }
+    await fetchChatAPIOnce()
   }
   catch (error: any) {
     if (error.message === 'canceled') {
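Both `onConversation` and `onRegenerate` implement the same long-reply continuation: when the API reports `finish_reason === 'length'`, the request is re-issued with an empty prompt and the previous message id as parent, and the partial texts are concatenated. A standalone sketch of that loop with an injected fetch function — `fetchLongReply`, `fetchOnce`, and the `Reply` shape are illustrative stand-ins for `fetchChatAPIProcess`, not repo code:

```typescript
// Illustrative reply shape; the real API response carries more fields.
interface Reply {
  id: string
  text: string
  finishReason?: string
}

// Loop until the model stops for a reason other than 'length',
// accumulating the partial texts into one full reply.
async function fetchLongReply(
  fetchOnce: (prompt: string, parentMessageId?: string) => Promise<Reply>,
  prompt: string,
): Promise<string> {
  let lastText = ''
  let message = prompt
  let parentMessageId: string | undefined
  while (true) {
    const data = await fetchOnce(message, parentMessageId)
    lastText += data.text
    if (data.finishReason !== 'length')
      return lastText
    // Truncated by the token limit: continue from the last message.
    parentMessageId = data.id
    message = ''
  }
}
```

In the diff the recursion into `fetchChatAPIOnce` plays the role of this `while` loop, with `lastText + data.text` performing the concatenation inside the streaming callback.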
@@ -284,7 +313,9 @@ function handleExport() {
   try {
     d.loading = true
     const ele = document.getElementById('image-wrapper')
-    const canvas = await html2canvas(ele as HTMLDivElement)
+    const canvas = await html2canvas(ele as HTMLDivElement, {
+      useCORS: true,
+    })
     const imgUrl = canvas.toDataURL('image/png')
     const tempLink = document.createElement('a')
     tempLink.style.display = 'none'
@@ -373,16 +404,10 @@ const buttonDisabled = computed(() => {
   return loading.value || !prompt.value || prompt.value.trim() === ''
 })

-const wrapClass = computed(() => {
-  if (isMobile.value)
-    return ['pt-14']
-  return []
-})
-
 const footerClass = computed(() => {
   let classes = ['p-4']
   if (isMobile.value)
-    classes = ['sticky', 'left-0', 'bottom-0', 'right-0', 'p-2', 'pr-4', 'overflow-hidden']
+    classes = ['sticky', 'left-0', 'bottom-0', 'right-0', 'p-2', 'pr-3', 'overflow-hidden']
   return classes
 })
@@ -397,14 +422,24 @@ onUnmounted(() => {
 </script>

 <template>
-  <div class="flex flex-col w-full h-full" :class="wrapClass">
+  <div class="flex flex-col w-full h-full">
+    <HeaderComponent
+      v-if="isMobile"
+      :using-context="usingContext"
+      @export="handleExport"
+      @toggle-using-context="toggleUsingContext"
+    />
     <main class="flex-1 overflow-hidden">
       <div
         id="scrollRef"
         ref="scrollRef"
         class="h-full overflow-hidden overflow-y-auto"
       >
-        <div id="image-wrapper" class="w-full max-w-screen-xl m-auto" :class="[isMobile ? 'p-2' : 'p-4']">
+        <div
+          id="image-wrapper"
+          class="w-full max-w-screen-xl m-auto dark:bg-[#101014]"
+          :class="[isMobile ? 'p-2' : 'p-4']"
+        >
           <template v-if="!dataSources.length">
             <div class="flex items-center justify-center mt-4 text-center text-neutral-300">
               <SvgIcon icon="ri:bubble-chart-fill" class="mr-2 text-3xl" />
@@ -445,11 +480,16 @@ onUnmounted(() => {
             <SvgIcon icon="ri:delete-bin-line" />
           </span>
         </HoverButton>
-        <HoverButton @click="handleExport">
+        <HoverButton v-if="!isMobile" @click="handleExport">
           <span class="text-xl text-[#4f555e] dark:text-white">
             <SvgIcon icon="ri:download-2-line" />
           </span>
         </HoverButton>
+        <HoverButton v-if="!isMobile" @click="toggleUsingContext">
+          <span class="text-xl" :class="{ 'text-[#4b9e5f]': usingContext, 'text-[#a8071a]': !usingContext }">
+            <SvgIcon icon="ri:chat-history-line" />
+          </span>
+        </HoverButton>
         <NInput
           v-model:value="prompt"
           type="textarea"


@@ -3,7 +3,6 @@ import { computed } from 'vue'
 import { NLayout, NLayoutContent } from 'naive-ui'
 import { useRouter } from 'vue-router'
 import Sider from './sider/index.vue'
-import Header from './header/index.vue'
 import Permission from './Permission.vue'
 import { useBasicLayout } from '@/hooks/useBasicLayout'
 import { useAppStore, useAuthStore, useChatStore } from '@/store'
@@ -40,7 +39,6 @@ const getContainerClass = computed(() => {
   <div class="h-full overflow-hidden" :class="getMobileClass">
     <NLayout class="z-40 transition" :class="getContainerClass" has-sider>
       <Sider />
-      <Header v-if="isMobile" />
       <NLayoutContent class="h-full">
         <RouterView v-slot="{ Component, route }">
           <component :is="Component" :key="route.fullPath" />


@@ -1,55 +0,0 @@
-<script lang="ts" setup>
-import { computed, nextTick } from 'vue'
-import { SvgIcon } from '@/components/common'
-import { useAppStore, useChatStore } from '@/store'
-
-const appStore = useAppStore()
-const chatStore = useChatStore()
-
-const collapsed = computed(() => appStore.siderCollapsed)
-const currentChatHistory = computed(() => chatStore.getChatHistoryByCurrentActive)
-
-function handleUpdateCollapsed() {
-  appStore.setSiderCollapsed(!collapsed.value)
-}
-
-function onScrollToTop() {
-  const scrollRef = document.querySelector('#scrollRef')
-  if (scrollRef)
-    nextTick(() => scrollRef.scrollTop = 0)
-}
-
-function onScrollToBottom() {
-  const scrollRef = document.querySelector('#scrollRef')
-  if (scrollRef)
-    nextTick(() => scrollRef.scrollTop = scrollRef.scrollHeight)
-}
-</script>
-
-<template>
-  <header
-    class="fixed top-0 left-0 right-0 z-30 border-b dark:border-neutral-800 bg-white/80 dark:bg-black/20 backdrop-blur"
-  >
-    <div class="relative flex items-center justify-between h-14">
-      <button
-        class="flex items-center justify-center w-11 h-11"
-        @click="handleUpdateCollapsed"
-      >
-        <SvgIcon v-if="collapsed" class="text-2xl" icon="ri:align-justify" />
-        <SvgIcon v-else class="text-2xl" icon="ri:align-right" />
-      </button>
-      <h1
-        class="flex-1 px-4 overflow-hidden text-center cursor-pointer select-none text-ellipsis whitespace-nowrap"
-        @dblclick="onScrollToTop"
-      >
-        {{ currentChatHistory?.title ?? '' }}
-      </h1>
-      <button
-        class="flex items-center justify-center w-11 h-11"
-        @click="onScrollToBottom"
-      >
-        <SvgIcon class="text-2xl" icon="ri:arrow-down-s-line" />
-      </button>
-    </div>
-  </header>
-</template>


@@ -9,8 +9,9 @@ const show = ref(false)
 <template>
   <footer class="flex items-center justify-between min-w-0 p-4 overflow-hidden border-t dark:border-neutral-800">
-    <UserAvatar />
+    <div class="flex-1 flex-shrink-0 overflow-hidden">
+      <UserAvatar />
+    </div>
     <HoverButton :tooltip="$t('setting.setting')" @click="show = true">
       <span class="text-xl text-[#4f555e] dark:text-white">
         <SvgIcon icon="ri:settings-4-line" />


@@ -1,6 +1,7 @@
 import path from 'path'
 import { defineConfig, loadEnv } from 'vite'
 import vue from '@vitejs/plugin-vue'
+import { VitePWA } from 'vite-plugin-pwa'

 export default defineConfig((env) => {
   const viteEnv = loadEnv(env.mode, process.cwd()) as unknown as ImportMetaEnv
@@ -11,7 +12,20 @@ export default defineConfig((env) => {
       '@': path.resolve(process.cwd(), 'src'),
     },
   },
-  plugins: [vue()],
+  plugins: [
+    vue(),
+    VitePWA({
+      injectRegister: 'auto',
+      manifest: {
+        name: 'chatGPT',
+        short_name: 'chatGPT',
+        icons: [
+          { src: 'pwa-192x192.png', sizes: '192x192', type: 'image/png' },
+          { src: 'pwa-512x512.png', sizes: '512x512', type: 'image/png' },
+        ],
+      },
+    }),
+  ],
   server: {
     host: '0.0.0.0',
     port: 1002,