Compare commits

1 Commit

| Author | SHA1 | Message | Date |
| ------ | ---- | ------- | ---- |
| ChenZhaoYu | 88740c13f0 | test: output | 2023-03-28 16:11:52 +08:00 |
21 changed files with 2232 additions and 2101 deletions

View File

@@ -1,2 +0,0 @@
docker-compose
kubernetes

View File

@@ -1,22 +0,0 @@
name: Close inactive issues
on:
  schedule:
    - cron: '30 1 * * *'

jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          days-before-issue-stale: 10
          days-before-issue-close: 5
          stale-issue-label: stale
          stale-issue-message: This issue is stale because it has been open for 10 days with no activity.
          close-issue-message: This issue was closed because it has been inactive for 5 days since being marked as stale.
          days-before-pr-stale: -1
          days-before-pr-close: -1
          repo-token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,33 +1,3 @@
## v2.10.9
`2023-04-03`
> Updated the default `accessToken` reverse-proxy address to [[acheong08](https://github.com/acheong08)]'s `https://bypass.churchless.tech/api/conversation`
## Enhancement
- Added `socks5` proxy authentication [[yimiaoxiehou](https://github.com/Chanzhaoyu/chatgpt-web/pull/999)]
- Added username/password configuration for the `socks` proxy [[hank-cp](https://github.com/Chanzhaoyu/chatgpt-web/pull/890)]
- Added optional log output [[zcong1993](https://github.com/Chanzhaoyu/chatgpt-web/pull/1041)]
- Localized the sidebar buttons [[simonwu53](https://github.com/Chanzhaoyu/chatgpt-web/pull/911)]
- Improved the code-block scrollbar height [[Fog3211](https://github.com/Chanzhaoyu/chatgpt-web/pull/1153)]
## BugFix
- Fixed a `PWA` issue [[bingo235](https://github.com/Chanzhaoyu/chatgpt-web/pull/807)]
- Fixed an `ESM` error [[kidonng](https://github.com/Chanzhaoyu/chatgpt-web/pull/826)]
- Fixed rate limiting not taking effect when the reverse proxy is enabled [[gitgitgogogo](https://github.com/Chanzhaoyu/chatgpt-web/pull/863)]
- Fixed `.env` possibly being ignored during `docker` builds [[zaiMoe](https://github.com/Chanzhaoyu/chatgpt-web/pull/877)]
- Fixed an export exception [[KingTwinkle](https://github.com/Chanzhaoyu/chatgpt-web/pull/938)]
- Fixed a null-value exception [[vchenpeng](https://github.com/Chanzhaoyu/chatgpt-web/pull/1103)]
- Fixed experience issues on mobile
## Other
- `Docker` container name definition [[LOVECHEN](https://github.com/Chanzhaoyu/chatgpt-web/pull/1035)]
- `kubernetes` deployment configuration [[CaoYunzhou](https://github.com/Chanzhaoyu/chatgpt-web/pull/1001)]
- Thanks to [[assassinliujie](https://github.com/Chanzhaoyu/chatgpt-web/pull/962)] and [[puppywang](https://github.com/Chanzhaoyu/chatgpt-web/pull/1017)] for their contributions
- Updated `kubernetes/deploy.yaml` [[idawnwon](https://github.com/Chanzhaoyu/chatgpt-web/pull/1085)]
- Documentation updates [[yi-ge](https://github.com/Chanzhaoyu/chatgpt-web/pull/883)]
- Documentation updates [[weifeng12x](https://github.com/Chanzhaoyu/chatgpt-web/pull/880)]
- Dependency updates
## v2.10.8
`2023-03-23`

README.en.md (new file, 337 lines)
View File

@@ -0,0 +1,337 @@
# ChatGPT Web
<div style="font-size: 1.5rem;">
<a href="./README.md">中文</a> |
<a href="./README.en.md">English</a>
</div>
</br>
> Disclaimer: This project is only released on GitHub, under the MIT License, free and for open-source learning purposes. There will be no account selling, paid services, discussion groups, or forums. Beware of fraud.
![cover](./docs/c1.png)
![cover2](./docs/c2.png)
- [ChatGPT Web](#chatgpt-web)
  - [Introduction](#introduction)
  - [Roadmap](#roadmap)
  - [Prerequisites](#prerequisites)
    - [Node](#node)
    - [PNPM](#pnpm)
    - [Fill in the Keys](#fill-in-the-keys)
  - [Install Dependencies](#install-dependencies)
    - [Backend](#backend)
    - [Frontend](#frontend)
  - [Run in Test Environment](#run-in-test-environment)
    - [Backend Service](#backend-service)
    - [Frontend Webpage](#frontend-webpage)
  - [Packaging](#packaging)
    - [Using Docker](#using-docker)
      - [Docker Parameter Example](#docker-parameter-example)
      - [Docker Build \& Run](#docker-build--run)
      - [Docker Compose](#docker-compose)
    - [Deployment with Railway](#deployment-with-railway)
      - [Railway Environment Variables](#railway-environment-variables)
    - [Manual packaging](#manual-packaging)
      - [Backend service](#backend-service-1)
      - [Frontend webpage](#frontend-webpage-1)
  - [Frequently Asked Questions](#frequently-asked-questions)
  - [Contributing](#contributing)
  - [Sponsorship](#sponsorship)
  - [License](#license)
## Introduction
Supports dual models and provides two unofficial `ChatGPT API` methods:
| Method | Free? | Reliability | Quality |
| --------------------------------------------- | ------ | ----------- | ------- |
| `ChatGPTAPI(gpt-3.5-turbo-0301)` | No | Reliable | Relatively clumsy |
| `ChatGPTUnofficialProxyAPI(Web accessToken)` | Yes | Relatively unreliable | Smart |
Comparison:
1. `ChatGPTAPI` uses `gpt-3.5-turbo-0301` to simulate `ChatGPT` through the official `OpenAI` completion `API` (the most reliable method, but it is not free and does not use models specifically tuned for chat).
2. `ChatGPTUnofficialProxyAPI` accesses `ChatGPT`'s backend `API` via an unofficial proxy server to bypass `Cloudflare` (uses the real `ChatGPT`, is very lightweight, but depends on third-party servers and has rate limits).
[Details](https://github.com/Chanzhaoyu/chatgpt-web/issues/138)
Switching Methods:
1. Go to the `service/.env.example` file and copy the contents to the `service/.env` file.
2. For `OpenAI API Key`, fill in the `OPENAI_API_KEY` field [(Get apiKey)](https://platform.openai.com/overview).
3. For `Web API`, fill in the `OPENAI_ACCESS_TOKEN` field [(Get accessToken)](https://chat.openai.com/api/auth/session).
4. When both are present, `OpenAI API Key` takes precedence.
Reverse Proxy:
Available when using `ChatGPTUnofficialProxyAPI`. [Details](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy)
```shell
# service/.env
API_REVERSE_PROXY=
```
Environment Variables:
For all parameter variables, check [here](#docker-parameter-example) or see:
```
/service/.env
```
## Roadmap
[✓] Dual models
[✓] Multiple session storage and context logic
[✓] Formatting and beautifying code-like message types
[✓] Access rights control
[✓] Data import and export
[✓] Save message to local image
[✓] Multilingual interface
[✓] Interface themes
[✗] More...
## Prerequisites
### Node
`node` requires version `^16 || ^18` (`node >= 14` requires installation of [fetch polyfill](https://github.com/developit/unfetch#usage-as-a-polyfill)), and multiple local `node` versions can be managed using [nvm](https://github.com/nvm-sh/nvm).
```shell
node -v
```
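If you need to install or switch versions, a typical `nvm` workflow looks like this (a sketch; assumes `nvm` is already installed):

```shell
# install and activate Node 18, then confirm the version
nvm install 18
nvm use 18
node -v
```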
### PNPM
If you have not installed `pnpm` before:
```shell
npm install pnpm -g
```
### Fill in the Keys
Get an `OpenAI API Key` or `accessToken` and fill it into the local environment variables ([jump to Introduction](#introduction)):
```
# service/.env file
# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=
# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response
OPENAI_ACCESS_TOKEN=
```
## Install Dependencies
> To make it easier for `backend developers` to understand, we did not use the front-end `workspace` mode; the front end and back end are stored in separate folders. If you only need to do secondary development of the front-end page, delete the `service` folder.
### Backend
Enter the `/service` folder and run the following command
```shell
pnpm install
```
### Frontend
Run the following command in the root directory
```shell
pnpm bootstrap
```
## Run in Test Environment
### Backend Service
Enter the `/service` folder and run the following command
```shell
pnpm start
```
### Frontend Webpage
Run the following command in the root directory
```shell
pnpm dev
```
## Packaging
### Using Docker
#### Docker Parameter Example
- `OPENAI_API_KEY` one of the two, required
- `OPENAI_ACCESS_TOKEN` one of the two; `OPENAI_API_KEY` takes precedence when both are present
- `OPENAI_API_BASE_URL` optional, available when `OPENAI_API_KEY` is set
- `OPENAI_API_MODEL` optional, available when `OPENAI_API_KEY` is set
- `API_REVERSE_PROXY` optional, available when `OPENAI_ACCESS_TOKEN` is set [Reference](#introduction)
- `AUTH_SECRET_KEY` access password, optional
- `TIMEOUT_MS` timeout, in milliseconds, optional
- `SOCKS_PROXY_HOST` optional, effective with `SOCKS_PROXY_PORT`
- `SOCKS_PROXY_PORT` optional, effective with `SOCKS_PROXY_HOST`
- `HTTPS_PROXY` optional, supports `http`, `https`, `socks5`
- `ALL_PROXY` optional, supports `http`, `https`, `socks5`
![docker](./docs/docker.png)
#### Docker Build & Run
```bash
docker build -t chatgpt-web .
# foreground operation
docker run --name chatgpt-web --rm -it -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key chatgpt-web
# background operation
docker run --name chatgpt-web -d -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key chatgpt-web
# running address
http://localhost:3002/
```
#### Docker Compose
[Hub Address](https://hub.docker.com/repository/docker/chenzhaoyu94/chatgpt-web/general)
```yml
version: '3'

services:
  app:
    image: chenzhaoyu94/chatgpt-web # always use latest; re-pull the tag image when updating
    ports:
      - 127.0.0.1:3002:3002
    environment:
      # choose one of the two
      OPENAI_API_KEY: xxxxxx
      # choose one of the two
      OPENAI_ACCESS_TOKEN: xxxxxx
      # API base URL, optional, available when OPENAI_API_KEY is set
      OPENAI_API_BASE_URL: xxxx
      # API model, optional, available when OPENAI_API_KEY is set
      OPENAI_API_MODEL: xxxx
      # reverse proxy, optional
      API_REVERSE_PROXY: xxx
      # access password, optional
      AUTH_SECRET_KEY: xxx
      # timeout, in milliseconds, optional
      TIMEOUT_MS: 60000
      # socks proxy, optional, effective with SOCKS_PROXY_PORT
      SOCKS_PROXY_HOST: xxxx
      # socks proxy port, optional, effective with SOCKS_PROXY_HOST
      SOCKS_PROXY_PORT: xxxx
      # HTTPS proxy, optional; supports http, https, socks5
      HTTPS_PROXY: http://xxx:7890
```
`OPENAI_API_BASE_URL` and `OPENAI_API_MODEL` are optional and only used when `OPENAI_API_KEY` is set.
### Deployment with Railway
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/new/template/yytmgc)
#### Railway Environment Variables
| Environment Variable | Required | Description |
| -------------------- | -------- | ------------------------------------------------------------------------------------------------- |
| `PORT` | Required | Default: `3002` |
| `AUTH_SECRET_KEY` | Optional | access password |
| `TIMEOUT_MS` | Optional | Timeout in milliseconds |
| `OPENAI_API_KEY` | Optional | Required for `OpenAI API`. `apiKey` can be obtained from [here](https://platform.openai.com/overview). |
| `OPENAI_ACCESS_TOKEN`| Optional | Required for `Web API`. `accessToken` can be obtained from [here](https://chat.openai.com/api/auth/session).|
| `OPENAI_API_BASE_URL` | Optional, only for `OpenAI API` | API endpoint. |
| `OPENAI_API_MODEL` | Optional, only for `OpenAI API` | API model. |
| `API_REVERSE_PROXY` | Optional, only for `Web API` | Reverse proxy address for `Web API`. [Details](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) |
| `SOCKS_PROXY_HOST` | Optional, effective with `SOCKS_PROXY_PORT` | Socks proxy. |
| `SOCKS_PROXY_PORT` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy port. |
| `HTTPS_PROXY` | Optional | HTTPS Proxy. |
| `ALL_PROXY` | Optional | ALL Proxy. |
> Note: Changing environment variables in Railway will cause re-deployment.
### Manual packaging
#### Backend service
> If you don't need the `node` interface of this project, you can skip the following steps.
Copy the `service` folder to a server that has a `node` service environment.
```shell
# Install
pnpm install
# Build
pnpm build
# Run
pnpm prod
```
PS: You can also run `pnpm start` directly on the server without packaging.
#### Frontend webpage
1. Create a `.env` file in the root directory based on the root `.env.example`, and set `VITE_GLOB_API_URL` to your actual backend API address (a minimal example follows the build command below).
2. Run the following command in the root directory and then copy the files in the `dist` folder to the root directory of your website service.
[Reference information](https://cn.vitejs.dev/guide/static-deploy.html#building-the-app)
```shell
pnpm build
```
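For step 1, a minimal root-directory `.env` might look like the following sketch; the value is a hypothetical placeholder and must point at your own backend:

```shell
# .env (root directory, hypothetical example)
VITE_GLOB_API_URL=http://your-server:3002/api
```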
## Frequently Asked Questions
Q: Why does Git always report an error when committing?
A: Commit messages are verified; please follow the [Commit Guidelines](./CONTRIBUTING.en.md).
Q: Where to change the request interface if only the frontend page is used?
A: The `VITE_GLOB_API_URL` field in the `.env` file at the root directory.
Q: Everything turns red when saving a file?
A: For `vscode`, install the project's recommended extensions, or manually install the `ESLint` extension.
Q: Why doesn't the frontend have a typewriter effect?
A: One possible cause is that Nginx has reverse-proxy buffering enabled: Nginx tries to buffer a certain amount of data from the backend before sending it to the browser. Try adding `proxy_buffering off;` after the reverse-proxy parameters, then reload Nginx. Other web servers are configured similarly.
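A hedged example of what that Nginx location block might look like (the upstream address is a placeholder for your own deployment):

```nginx
location / {
  # forward requests to the chatgpt-web service
  proxy_pass http://127.0.0.1:3002;
  # disable buffering so streamed chunks reach the browser immediately
  proxy_buffering off;
}
```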
Q: The content returned is incomplete?
A: Each API response has a length limit. You can set the `VITE_GLOB_OPEN_LONG_REPLY` field in the root-directory `.env` file to `true` and rebuild the front end to enable the long-reply feature, which returns the full content. Note that using this feature may incur higher API usage fees.
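A minimal sketch of enabling it (rebuild the front end afterwards for the change to take effect):

```shell
# .env (root directory)
VITE_GLOB_OPEN_LONG_REPLY=true
```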
## Contributing
Please read the [Contributing Guidelines](./CONTRIBUTING.en.md) before contributing.
Thanks to all the contributors!
<a href="https://github.com/Chanzhaoyu/chatgpt-web/graphs/contributors">
<img src="https://contrib.rocks/image?repo=Chanzhaoyu/chatgpt-web" />
</a>
## Sponsorship
If you find this project helpful and circumstances permit, you can give me a little support. Thank you very much for your support~
<div style="display: flex; gap: 20px;">
<div style="text-align: center">
<img style="max-width: 100%" src="./docs/wechat.png" alt="WeChat" />
<p>WeChat Pay</p>
</div>
<div style="text-align: center">
<img style="max-width: 100%" src="./docs/alipay.png" alt="Alipay" />
<p>Alipay</p>
</div>
</div>
## License
MIT © [ChenZhaoYu](./license)

View File

@@ -1,5 +1,11 @@
# ChatGPT Web
<div style="font-size: 1.5rem;">
<a href="./README.md">中文</a> |
<a href="./README.en.md">English</a>
</div>
</br>
> Disclaimer: This project is released only on GitHub, under the MIT License, free and for open-source learning purposes. There will be no account selling, paid services, discussion groups, or forums of any kind. Beware of fraud.
![cover](./docs/c1.png)
@@ -43,8 +49,8 @@
| `ChatGPTUnofficialProxyAPI(Web accessToken)` | Yes | Relatively unreliable | Smart |
Comparison:
1. `ChatGPTAPI` uses `gpt-3.5-turbo` to call `ChatGPT` through the official `OpenAI` `API`
2. `ChatGPTUnofficialProxyAPI` accesses `ChatGPT`'s backend `API` via an unofficial proxy server to bypass `Cloudflare` (depends on third-party servers and is rate-limited)
1. `ChatGPTAPI` uses `gpt-3.5-turbo-0301` to simulate `ChatGPT` through the official `OpenAI` completion `API` (the most reliable method, but not free, and not using a model fine-tuned for chat)
2. `ChatGPTUnofficialProxyAPI` accesses `ChatGPT`'s backend `API` via an unofficial proxy server to bypass `Cloudflare` (uses the real `ChatGPT`; very lightweight, but depends on third-party servers and is rate-limited)
Warning:
1. You should use the `API` method first
@@ -155,7 +161,6 @@ pnpm dev
- `OPENAI_API_KEY` and `OPENAI_ACCESS_TOKEN`: choose one of the two
- `OPENAI_API_MODEL` sets the model, optional, default: `gpt-3.5-turbo`
- `OPENAI_API_BASE_URL` sets the API base URL, optional, default: `https://api.openai.com`
- `OPENAI_API_DISABLE_DEBUG` disables the API debug log, optional, default: empty (not disabled)
Available when using `ACCESS_TOKEN`:
@@ -253,8 +258,6 @@ services:
| `API_REVERSE_PROXY` | Optional, available for `Web API` | Reverse-proxy address for `Web API` [Details](https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy) |
| `SOCKS_PROXY_HOST` | Optional, effective with `SOCKS_PROXY_PORT` | Socks proxy |
| `SOCKS_PROXY_PORT` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy port |
| `SOCKS_PROXY_USERNAME` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy username |
| `SOCKS_PROXY_PASSWORD` | Optional, effective with `SOCKS_PROXY_HOST` | Socks proxy password |
| `HTTPS_PROXY` | Optional | HTTPS proxy; supports http, https, socks5 |
| `ALL_PROXY` | Optional | All-purpose proxy; supports http, https, socks5 |

View File

@@ -2,39 +2,33 @@ version: '3'
services:
  app:
    container_name: chatgpt-web
    image: chenzhaoyu94/chatgpt-web # always use latest; re-pull the tag image when updating
    ports:
      - 3002:3002
    environment:
      # choose one of the two
      OPENAI_API_KEY:
      OPENAI_API_KEY: sk-xxx
      # choose one of the two
      OPENAI_ACCESS_TOKEN:
      OPENAI_ACCESS_TOKEN: xxx
      # API base URL, optional, available when OPENAI_API_KEY is set
      OPENAI_API_BASE_URL:
      OPENAI_API_BASE_URL: xxx
      # API model, optional, available when OPENAI_API_KEY is set
      OPENAI_API_MODEL:
      OPENAI_API_MODEL: xxx
      # reverse proxy, optional
      API_REVERSE_PROXY:
      API_REVERSE_PROXY: xxx
      # access password, optional
      AUTH_SECRET_KEY:
      AUTH_SECRET_KEY: xxx
      # max requests per hour, optional, unlimited by default
      MAX_REQUEST_PER_HOUR: 0
      # timeout, in milliseconds, optional
      TIMEOUT_MS: 60000
      # socks proxy, optional, effective with SOCKS_PROXY_PORT
      SOCKS_PROXY_HOST:
      SOCKS_PROXY_HOST: xxx
      # socks proxy port, optional, effective with SOCKS_PROXY_HOST
      SOCKS_PROXY_PORT:
      # socks proxy username, optional, effective with SOCKS_PROXY_HOST & SOCKS_PROXY_PORT
      SOCKS_PROXY_USERNAME:
      # socks proxy password, optional, effective with SOCKS_PROXY_HOST & SOCKS_PROXY_PORT
      SOCKS_PROXY_PASSWORD:
      SOCKS_PROXY_PORT: xxx
      # HTTPS proxy, optional
      HTTPS_PROXY:
      HTTPS_PROXY: http://xxx:7890
  nginx:
    container_name: nginx
    image: nginx:alpine
    ports:
      - '80:80'

View File

@@ -1,9 +0,0 @@
## Adding a Kubernetes deployment method
```
kubectl apply -f deploy.yaml
```
### If Ingress domain access is required
```
kubectl apply -f ingress.yaml
```

View File

@@ -1,66 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chatgpt-web
  labels:
    app: chatgpt-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chatgpt-web
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: chatgpt-web
    spec:
      containers:
        - image: chenzhaoyu94/chatgpt-web
          name: chatgpt-web
          imagePullPolicy: Always
          ports:
            - containerPort: 3002
          env:
            - name: OPENAI_API_KEY
              value: sk-xxx
            - name: OPENAI_API_BASE_URL
              value: 'https://api.openai.com'
            - name: OPENAI_API_MODEL
              value: gpt-3.5-turbo
            - name: API_REVERSE_PROXY
              value: https://bypass.churchless.tech/api/conversation
            - name: AUTH_SECRET_KEY
              value: '123456'
            - name: TIMEOUT_MS
              value: '60000'
            - name: SOCKS_PROXY_HOST
              value: ''
            - name: SOCKS_PROXY_PORT
              value: ''
            - name: HTTPS_PROXY
              value: ''
          resources:
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 300m
              memory: 300Mi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: chatgpt-web
  name: chatgpt-web
spec:
  ports:
    - name: chatgpt-web
      port: 3002
      protocol: TCP
      targetPort: 3002
  selector:
    app: chatgpt-web
  type: ClusterIP

View File

@@ -1,21 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '5'
  name: chatgpt-web
spec:
  rules:
    - host: chatgpt.example.com
      http:
        paths:
          - backend:
              service:
                name: chatgpt-web
                port:
                  number: 3002
            path: /
            pathType: ImplementationSpecific
  tls:
    - secretName: chatgpt-web-tls

View File

@@ -1,6 +1,6 @@
{
  "name": "chatgpt-web",
  "version": "2.10.9",
  "version": "2.10.8",
  "private": false,
  "description": "ChatGPT Web",
  "author": "ChenZhaoYu <chenzhaoyu1994@gmail.com>",

pnpm-lock.yaml (generated, 2331 lines)

File diff suppressed because it is too large Load Diff

View File

@@ -10,12 +10,7 @@ OPENAI_API_BASE_URL=
# OpenAI API Model - https://platform.openai.com/docs/models
OPENAI_API_MODEL=
# set `true` to disable OpenAI API debug log
OPENAI_API_DISABLE_DEBUG=
# Reverse Proxy - Available on accessToken
# Default: https://bypass.churchless.tech/api/conversation
# More: https://github.com/transitive-bullshit/chatgpt-api#reverse-proxy
# Reverse Proxy
API_REVERSE_PROXY=
# timeout
@@ -33,12 +28,6 @@ SOCKS_PROXY_HOST=
# Socks Proxy Port
SOCKS_PROXY_PORT=
# Socks Proxy Username
SOCKS_PROXY_USERNAME=
# Socks Proxy Password
SOCKS_PROXY_PASSWORD=
# HTTPS PROXY
HTTPS_PROXY=

View File

@@ -25,7 +25,7 @@
  },
  "dependencies": {
    "axios": "^1.3.4",
    "chatgpt": "^5.2.2",
    "chatgpt": "^5.1.2",
    "dotenv": "^16.0.3",
    "esno": "^0.16.3",
    "express": "^4.18.2",

service/pnpm-lock.yaml (generated, 1231 lines)

File diff suppressed because it is too large Load Diff

View File

@@ -25,7 +25,6 @@ const ErrorCodeMessage: Record<string, string> = {
}
const timeoutMs: number = !isNaN(+process.env.TIMEOUT_MS) ? +process.env.TIMEOUT_MS : 30 * 1000
const disableDebug: boolean = process.env.OPENAI_API_DISABLE_DEBUG === 'true'
let apiModel: ApiModel
@@ -45,7 +44,7 @@ let api: ChatGPTAPI | ChatGPTUnofficialProxyAPI
const options: ChatGPTAPIOptions = {
  apiKey: process.env.OPENAI_API_KEY,
  completionParams: { model },
  debug: !disableDebug,
  debug: true,
}
// increase max token limit if use gpt-4
@@ -73,15 +72,13 @@ let api: ChatGPTAPI | ChatGPTUnofficialProxyAPI
const OPENAI_API_MODEL = process.env.OPENAI_API_MODEL
const options: ChatGPTUnofficialProxyAPIOptions = {
  accessToken: process.env.OPENAI_ACCESS_TOKEN,
  debug: !disableDebug,
  debug: true,
}
if (isNotEmptyString(OPENAI_API_MODEL))
  options.model = OPENAI_API_MODEL
options.apiReverseProxyUrl = isNotEmptyString(process.env.API_REVERSE_PROXY)
  ? process.env.API_REVERSE_PROXY
  : 'https://bypass.churchless.tech/api/conversation'
if (isNotEmptyString(process.env.API_REVERSE_PROXY))
  options.apiReverseProxyUrl = process.env.API_REVERSE_PROXY
setupProxy(options)
@@ -161,19 +158,17 @@ async function chatConfig() {
}
function setupProxy(options: ChatGPTAPIOptions | ChatGPTUnofficialProxyAPIOptions) {
  if (isNotEmptyString(process.env.SOCKS_PROXY_HOST) && isNotEmptyString(process.env.SOCKS_PROXY_PORT)) {
  if (process.env.SOCKS_PROXY_HOST && process.env.SOCKS_PROXY_PORT) {
    const agent = new SocksProxyAgent({
      hostname: process.env.SOCKS_PROXY_HOST,
      port: process.env.SOCKS_PROXY_PORT,
      userId: isNotEmptyString(process.env.SOCKS_PROXY_USERNAME) ? process.env.SOCKS_PROXY_USERNAME : undefined,
      password: isNotEmptyString(process.env.SOCKS_PROXY_PASSWORD) ? process.env.SOCKS_PROXY_PASSWORD : undefined,
    })
    options.fetch = (url, options) => {
      return fetch(url, { agent, ...options })
    }
  }
  else {
    if (isNotEmptyString(process.env.HTTPS_PROXY) || isNotEmptyString(process.env.ALL_PROXY)) {
    if (process.env.HTTPS_PROXY || process.env.ALL_PROXY) {
      const httpsProxy = process.env.HTTPS_PROXY || process.env.ALL_PROXY
      if (httpsProxy) {
        const agent = new HttpsProxyAgent(httpsProxy)
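For readability, the `setupProxy` function after this hunk reduces to roughly the following sketch (reconstructed from the diff; the import paths and the simplified `options` type are assumptions, not the file's exact declarations):

```ts
import fetch from 'node-fetch'
import { SocksProxyAgent } from 'socks-proxy-agent'
import { HttpsProxyAgent } from 'https-proxy-agent'

function setupProxy(options: { fetch?: typeof fetch }) {
  if (process.env.SOCKS_PROXY_HOST && process.env.SOCKS_PROXY_PORT) {
    // route all API traffic through the SOCKS proxy
    const agent = new SocksProxyAgent({
      hostname: process.env.SOCKS_PROXY_HOST,
      port: process.env.SOCKS_PROXY_PORT,
    })
    options.fetch = (url, fetchOptions) => fetch(url, { agent, ...fetchOptions })
  }
  else if (process.env.HTTPS_PROXY || process.env.ALL_PROXY) {
    const httpsProxy = process.env.HTTPS_PROXY || process.env.ALL_PROXY
    if (httpsProxy) {
      // fall back to an HTTP(S) proxy when no SOCKS proxy is configured
      const agent = new HttpsProxyAgent(httpsProxy)
      options.fetch = (url, fetchOptions) => fetch(url, { agent, ...fetchOptions })
    }
  }
}
```

Note the hunk also drops the `SOCKS_PROXY_USERNAME`/`SOCKS_PROXY_PASSWORD` fields, so authenticated SOCKS proxies are no longer supported after this change.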

View File

@@ -25,12 +25,21 @@ router.post('/chat-process', [auth, limiter], async (req, res) => {
  try {
    const { prompt, options = {}, systemMessage } = req.body as RequestProps
    let firstChunk = true
    let chatLength = 0
    let newChatLength = 0
    await chatReplyProcess({
      message: prompt,
      lastContext: options,
      process: (chat: ChatMessage) => {
        res.write(firstChunk ? JSON.stringify(chat) : `\n${JSON.stringify(chat)}`)
        firstChunk = false
        if (firstChunk) {
          res.write(`${JSON.stringify(chat)}t1h1i4s5i1s4a1s9i1l9l8y1s0plit`)
          firstChunk = false
        }
        else if (chatLength !== chat.text.length) {
          newChatLength = chat.text.length
          res.write(chat.text.substring(chatLength, newChatLength))
          chatLength = newChatLength
        }
      },
      systemMessage,
    })
@@ -76,7 +85,7 @@ router.post('/verify', async (req, res) => {
    res.send({ status: 'Success', message: 'Verify successfully', data: null })
  }
  catch (error) {
    res.send({ status: 'Fail', message: error.message, data: null })
    res.send({ status: 'Fail', message: error.messagen, data: null })
  }
})
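In plain terms, this hunk changes the streaming protocol: instead of writing one JSON object per line, the server now writes the full JSON message once, terminated by a magic delimiter string, and afterwards streams only the text added since the last write. A minimal sketch of the resulting write logic (names taken from the diff; the `res`/`chat` shapes are assumptions):

```ts
// delimiter separating the initial JSON metadata from raw text deltas
const MAGIC_SPLIT = 't1h1i4s5i1s4a1s9i1l9l8y1s0plit'

let firstChunk = true
let chatLength = 0

function writeChunk(chat: { text: string }, res: { write: (s: string) => void }) {
  if (firstChunk) {
    // first write: full metadata as JSON, terminated by the delimiter
    res.write(`${JSON.stringify(chat)}${MAGIC_SPLIT}`)
    firstChunk = false
  }
  else if (chatLength !== chat.text.length) {
    // subsequent writes: only the text accumulated past the last sent offset
    res.write(chat.text.substring(chatLength))
    chatLength = chat.text.length
  }
}
```

The client side (see the chat view diff below) splits the response on the same token: everything before it is parsed as JSON once, and everything after it is treated as plain text.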

View File

@@ -29,7 +29,7 @@ function handleReset() {
<div class="flex items-center space-x-4">
<span class="flex-shrink-0 w-[100px]">{{ $t('setting.role') }}</span>
<div class="flex-1">
<NInput v-model:value="systemMessage" type="textarea" :autosize="{ minRows: 1, maxRows: 4 }" />
<NInput v-model:value="systemMessage" placeholder="" />
</div>
<NButton size="tiny" text type="primary" @click="updateSettings({ systemMessage })">
{{ $t('common.save') }}

View File

@@ -7,8 +7,9 @@ export interface SettingsState {
}

export function defaultSetting(): SettingsState {
  const currentDate = new Date().toISOString().split('T')[0]
  return {
    systemMessage: 'You are ChatGPT, a large language model trained by OpenAI. Follow the user\'s instructions carefully. Respond using markdown.',
    systemMessage: `You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible.\nKnowledge cutoff: 2021-09-01\nCurrent date: ${currentDate}`,
  }
}

View File

@@ -123,10 +123,7 @@ html {
}

code.hljs {
  padding: 3px 5px;
  &::-webkit-scrollbar {
    height: 4px;
  }
  padding: 3px 5px
}

.hljs {

View File

@@ -65,14 +65,18 @@ defineExpose({ textRef })
<template>
  <div class="text-black" :class="wrapClass">
    <div ref="textRef" class="leading-relaxed break-words">
      <div v-if="!inversion" class="flex items-end">
        <div v-if="!asRawText" class="w-full markdown-body" v-html="text" />
        <div v-else class="w-full whitespace-pre-wrap" v-text="text" />
        <span v-if="loading" class="dark:text-white w-[4px] h-[20px] block animate-blink" />
    <template v-if="loading">
      <span class="dark:text-white w-[4px] h-[20px] block animate-blink" />
    </template>
    <template v-else>
      <div ref="textRef" class="leading-relaxed break-words">
        <div v-if="!inversion">
          <div v-if="!asRawText" class="markdown-body" v-html="text" />
          <div v-else class="whitespace-pre-wrap" v-text="text" />
        </div>
        <div v-else class="whitespace-pre-wrap" v-text="text" />
      </div>
      <div v-else class="whitespace-pre-wrap" v-text="text" />
    </div>
    </template>
  </div>
</template>

View File

@@ -107,7 +107,9 @@ async function onConversation() {
  scrollToBottom()
  try {
    let lastText = ''
    const magicSplit = 't1h1i4s5i1s4a1s9i1l9l8y1s0plit'
    let renderText = ''
    let firstTime = true
    const fetchChatAPIOnce = async () => {
      await fetchChatAPIProcess<Chat.ConversationResponse>({
        prompt: message,
@@ -117,43 +119,49 @@ async function onConversation() {
          const xhr = event.target
          const { responseText } = xhr
          // Always process the final line
          const lastIndex = responseText.lastIndexOf('\n', responseText.length - 2)
          let chunk = responseText
          if (lastIndex !== -1)
            chunk = responseText.substring(lastIndex)
          try {
            const data = JSON.parse(chunk)
            updateChat(
              +uuid,
              dataSources.value.length - 1,
              {
                dateTime: new Date().toLocaleString(),
                text: lastText + (data.text ?? ''),
                inversion: false,
                error: false,
                loading: true,
                conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
                requestOptions: { prompt: message, options: { ...options } },
              },
            )
            if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
              options.parentMessageId = data.id
              lastText = data.text
              message = ''
              return fetchChatAPIOnce()
          const splitIndexBegin = responseText.search(magicSplit)
          if (splitIndexBegin !== -1) {
            const splitIndexEnd = splitIndexBegin + magicSplit.length
            const firstChunk = responseText.substring(0, splitIndexBegin)
            const deltaText = responseText.substring(splitIndexEnd)
            try {
              const data = JSON.parse(firstChunk)
              if (firstTime) {
                firstTime = false
                renderText = data.text ?? ''
              }
              else {
                renderText = deltaText ?? ''
              }
              updateChat(
                +uuid,
                dataSources.value.length - 1,
                {
                  dateTime: new Date().toLocaleString(),
                  text: renderText,
                  inversion: false,
                  error: false,
                  loading: false,
                  conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
                  requestOptions: { prompt: message, ...options },
                },
              )
              if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
                options.parentMessageId = data.id
                message = ''
                return fetchChatAPIOnce()
              }
            }
            catch (error) {
              //
            }
            scrollToBottomIfAtBottom()
          }
          catch (error) {
            //
          }
        },
      })
      updateChatSome(+uuid, dataSources.value.length - 1, { loading: false })
    }
    await fetchChatAPIOnce()
  }
  catch (error: any) {
@@ -238,7 +246,9 @@ async function onRegenerate(index: number) {
  )
  try {
    let lastText = ''
    const magicSplit = 't1h1i4s5i1s4a1s9i1l9l8y1s0plit'
    let renderText = ''
    let firstTime = true
    const fetchChatAPIOnce = async () => {
      await fetchChatAPIProcess<Chat.ConversationResponse>({
        prompt: message,
@@ -248,39 +258,48 @@ async function onRegenerate(index: number) {
          const xhr = event.target
          const { responseText } = xhr
          // Always process the final line
          const lastIndex = responseText.lastIndexOf('\n', responseText.length - 2)
          let chunk = responseText
          if (lastIndex !== -1)
            chunk = responseText.substring(lastIndex)
          try {
            const data = JSON.parse(chunk)
            updateChat(
              +uuid,
              index,
              {
                dateTime: new Date().toLocaleString(),
                text: lastText + (data.text ?? ''),
                inversion: false,
                error: false,
                loading: true,
                conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
                requestOptions: { prompt: message, ...options },
              },
            )
            if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
              options.parentMessageId = data.id
              lastText = data.text
              message = ''
              return fetchChatAPIOnce()
          const splitIndexBegin = responseText.search(magicSplit)
          if (splitIndexBegin !== -1) {
            const splitIndexEnd = splitIndexBegin + magicSplit.length
            const firstChunk = responseText.substring(0, splitIndexBegin)
            const deltaText = responseText.substring(splitIndexEnd)
            try {
              const data = JSON.parse(firstChunk)
              if (firstTime) {
                firstTime = false
                renderText = data.text ?? ''
              }
              else {
                renderText = deltaText ?? ''
              }
              updateChat(
                +uuid,
                index,
                {
                  dateTime: new Date().toLocaleString(),
                  text: renderText,
                  inversion: false,
                  error: false,
                  loading: false,
                  conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
                  requestOptions: { prompt: message, ...options },
                },
              )
              if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
                options.parentMessageId = data.id
                message = ''
                return fetchChatAPIOnce()
              }
            }
            catch (error) {
              //
            }
          }
          catch (error) {
            //
          }
        },
      })
      updateChatSome(+uuid, index, { loading: false })
    }
    await fetchChatAPIOnce()
  }
@@ -469,16 +488,13 @@ onUnmounted(() => {
<template>
  <div class="flex flex-col w-full h-full">
    <HeaderComponent
      v-if="isMobile"
      :using-context="usingContext"
      @export="handleExport"
      v-if="isMobile" :using-context="usingContext" @export="handleExport"
      @toggle-using-context="toggleUsingContext"
    />
    <main class="flex-1 overflow-hidden">
      <div id="scrollRef" ref="scrollRef" class="h-full overflow-hidden overflow-y-auto">
        <div
          id="image-wrapper"
          class="w-full max-w-screen-xl m-auto dark:bg-[#101014]"
          id="image-wrapper" class="w-full max-w-screen-xl m-auto dark:bg-[#101014]"
          :class="[isMobile ? 'p-2' : 'p-4']"
        >
          <template v-if="!dataSources.length">
@@ -490,14 +506,8 @@ onUnmounted(() => {
          <template v-else>
            <div>
              <Message
                v-for="(item, index) of dataSources"
                :key="index"
                :date-time="item.dateTime"
                :text="item.text"
                :inversion="item.inversion"
                :error="item.error"
                :loading="item.loading"
                @regenerate="onRegenerate(index)"
                v-for="(item, index) of dataSources" :key="index" :date-time="item.dateTime" :text="item.text"
                :inversion="item.inversion" :error="item.error" :loading="item.loading" @regenerate="onRegenerate(index)"
                @delete="handleDelete(index)"
              />
              <div class="sticky bottom-0 left-0 flex justify-center">
@@ -534,15 +544,9 @@ onUnmounted(() => {
          <NAutoComplete v-model:value="prompt" :options="searchOptions" :render-label="renderOption">
            <template #default="{ handleInput, handleBlur, handleFocus }">
              <NInput
                ref="inputRef"
                v-model:value="prompt"
                type="textarea"
                :placeholder="placeholder"
                :autosize="{ minRows: 1, maxRows: isMobile ? 4 : 8 }"
                @input="handleInput"
                @focus="handleFocus"
                @blur="handleBlur"
                @keypress="handleEnter"
                ref="inputRef" v-model:value="prompt" type="textarea" :placeholder="placeholder"
                :autosize="{ minRows: 1, maxRows: isMobile ? 4 : 8 }" @input="handleInput" @focus="handleFocus"
                @blur="handleBlur" @keypress="handleEnter"
              />
            </template>
          </NAutoComplete>