Compare commits

5 commits:

- 086bfb3ce9
- cd58d80be9
- 00b3aa9bb8
- 8822742664
- 2ea1d1d02a
Dockerfile

```diff
@@ -2,11 +2,11 @@ FROM alpine:3.15.3

 LABEL maintainer="cookeem"
 LABEL email="cookeem@qq.com"
-LABEL version="v1.0.1"
+LABEL version="v1.0.2"

 RUN adduser -h /chatgpt-service -u 1000 -D dory
 COPY chatgpt-service /chatgpt-service/
 WORKDIR /chatgpt-service
 USER dory

-# docker build -t doryengine/chatgpt-service:v1.0.1-alpine .
+# docker build -t doryengine/chatgpt-service:v1.0.2-alpine .
```
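A sketch of rebuilding the image at the new tag. The Dockerfile only shows that a prebuilt `chatgpt-service` binary is copied in, so the cross-compilation flags below are an assumption, not taken from the repo's build scripts:

```bash
# Build a linux binary suitable for the alpine base image
# (CGO_ENABLED=0 GOOS=linux are assumed flags, not from this repo)
CGO_ENABLED=0 GOOS=linux go build

# Build the image at the new tag, per the Dockerfile comment
docker build -t doryengine/chatgpt-service:v1.0.2-alpine .
```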
README.md (55 lines changed)

The README was translated from Chinese into English, language-switch links were added, and the quick start now uses the renamed `apiKey` setting. The new version reads:

# Real-time ChatGPT service, based on the latest gpt-3.5-turbo-0301 model

- [English README](README.md)
- [中文 README](README_CN.md)

## About chatgpt-service and chatgpt-stream

- chatgpt-service: [https://github.com/cookeem/chatgpt-service](https://github.com/cookeem/chatgpt-service)
- chatgpt-service is a backend service that receives chatGPT messages in real time and feeds them back to chatgpt-stream in real time through websocket
- chatgpt-stream: [https://github.com/cookeem/chatgpt-stream](https://github.com/cookeem/chatgpt-stream)
- chatgpt-stream is a front-end service that receives the messages returned by chatgpt-service in real time through websocket

## gitee

- [https://gitee.com/cookeem/chatgpt-service](https://gitee.com/cookeem/chatgpt-service)
- [https://gitee.com/cookeem/chatgpt-stream](https://gitee.com/cookeem/chatgpt-stream)

## Demo

(animated demo image)

## Quick start

```bash
# Pull source code
git clone https://github.com/cookeem/chatgpt-service.git
cd chatgpt-service

# ChatGPT registration page: https://beta.openai.com/signup
# ChatGPT registration tutorial: https://www.cnblogs.com/damugua/p/16969508.html
# ChatGPT API key management page: https://beta.openai.com/account/api-keys

# Edit the config.yaml configuration file and change apiKey to your openai.com API key
vi config.yaml
# your openai.com API key
apiKey: "xxxxxx"

# Start the service with docker-compose
docker-compose up -d

# Check service status
docker-compose ps
     Name                   Command               State                    Ports
-----------------------------------------------------------------------------------------------
chatgpt-service   /chatgpt-service/chatgpt-s ...   Up      0.0.0.0:59142->9000/tcp
chatgpt-stream    /docker-entrypoint.sh ngin ...   Up      0.0.0.0:3000->80/tcp,:::3000->80/tcp

# To access the page, make sure your server can reach the chatGPT API
# http://localhost:3000
```

## How to build

```bash
# Pull build dependencies
go mod tidy

# Compile the project
go build

# Run the service
./chatgpt-service

# API url
# ws://localhost:9000/api/ws/chat

# Install wscat
npm install -g wscat

# Use wscat to test the websocket, then enter the question you want to ask
wscat --connect ws://localhost:9000/api/ws/chat
```
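Each frame the backend writes over this websocket is a JSON-encoded `Message` with `Kind`, `Msg`, `MsgId`, and `CreateTime` fields (see chat/service.go below). A hypothetical frame for illustration; the exact field casing depends on `Message`'s JSON tags, which this diff does not show, so lower camel case is assumed by analogy with the `Config` struct:

```json
{
  "kind": "chat",
  "msg": "partial answer text",
  "msgId": "a-uuid-v4-string",
  "createTime": "2023-03-15 12:00:00"
}
```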
README_CN.md (new file, 75 lines)

The new README_CN.md carries the original Chinese README content, mirroring the English README.md above section for section: the title, the language-switch links, the chatGPT-service/chatGPT-stream overview, the gitee links, the demo image, the docker-compose quick start (already using the renamed apiKey setting), and the build and wscat instructions.
chat package, imports and constants:

```diff
@@ -1,6 +1,7 @@
 package chat

 import (
+	"github.com/sashabaranov/go-openai"
 	log "github.com/sirupsen/logrus"
 	"os"
 	"time"
@@ -41,6 +42,29 @@ func (logger Logger) LogPanic(args ...interface{}) {
 const (
 	StatusFail string = "FAIL"

-	pingPeriod = time.Second * 50
-	pingWait   = time.Second * 60
+	PingPeriod = time.Second * 50
+	PingWait   = time.Second * 60
+)
+
+var (
+	GPTModels = []string{
+		openai.GPT432K0314,
+		openai.GPT432K,
+		openai.GPT40314,
+		openai.GPT4,
+		openai.GPT3Dot5Turbo0301,
+		openai.GPT3Dot5Turbo,
+		openai.GPT3TextDavinci003,
+		openai.GPT3TextDavinci002,
+		openai.GPT3TextCurie001,
+		openai.GPT3TextBabbage001,
+		openai.GPT3TextAda001,
+		openai.GPT3TextDavinci001,
+		openai.GPT3DavinciInstructBeta,
+		openai.GPT3Davinci,
+		openai.GPT3CurieInstructBeta,
+		openai.GPT3Curie,
+		openai.GPT3Ada,
+		openai.GPT3Babbage,
+	}
 )
```
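PingPeriod and PingWait are now exported, and service.go below is updated to the capitalized names. A minimal, self-contained sketch of the keepalive pattern they drive, mirroring wsPingMsg in chat/service.go (the package, function, and variable names here are illustrative, not from the repo):

```go
package wsutil

import (
	"sync"
	"time"

	"github.com/gorilla/websocket"
)

// Local stand-ins for chat.PingPeriod and chat.PingWait.
const (
	pingPeriod = time.Second * 50
	pingWait   = time.Second * 60
)

// keepAlive pings the peer every pingPeriod; each write must complete
// within pingWait. The mutex serializes writes with other goroutines,
// since gorilla/websocket allows only one concurrent writer.
func keepAlive(conn *websocket.Conn, mu *sync.Mutex) {
	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()
	for range ticker.C {
		_ = conn.SetWriteDeadline(time.Now().Add(pingWait))
		mu.Lock()
		err := conn.WriteMessage(websocket.PingMessage, nil)
		mu.Unlock()
		if err != nil {
			return // peer unreachable; let the read loop close the connection
		}
	}
}
```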
chat/service.go (274 lines changed)

```diff
@@ -2,7 +2,9 @@ package chat

 import (
 	"context"
+	"errors"
 	"fmt"
+	"io"
 	"net/http"
 	"strings"
 	"sync"
@@ -11,7 +13,7 @@ import (
 	"github.com/gin-gonic/gin"
 	"github.com/google/uuid"
 	"github.com/gorilla/websocket"
-	gogpt "github.com/sashabaranov/go-gpt3"
+	openai "github.com/sashabaranov/go-openai"
 )

 type Api struct {
@@ -46,7 +48,7 @@ func (api *Api) responseFunc(c *gin.Context, startTime time.Time, status, msg st

 func (api *Api) wsPingMsg(conn *websocket.Conn, chClose, chIsCloseSet chan int) {
 	var err error
-	ticker := time.NewTicker(pingPeriod)
+	ticker := time.NewTicker(PingPeriod)

 	var mutex = &sync.Mutex{}

@@ -57,7 +59,7 @@ func (api *Api) wsPingMsg(conn *websocket.Conn, chClose, chIsCloseSet chan int)
 	for {
 		select {
 		case <-ticker.C:
-			conn.SetWriteDeadline(time.Now().Add(pingWait))
+			conn.SetWriteDeadline(time.Now().Add(PingWait))
 			mutex.Lock()
 			err = conn.WriteMessage(websocket.PingMessage, nil)
 			if err != nil {
```

The largest hunk, `@@ -72,94 +74,192 @@`, rewrites `GetChatMessage`. The old version took a `*gogpt.Client`, hardcoded `Model: gogpt.GPT3Dot5Turbo0301` in a single `ChatCompletionRequest`, and treated every `stream.Recv()` error the same way. The new version takes a `*openai.Client` and switches on `api.Config.Model`: chat models go through `CreateChatCompletionStream` and accumulate `choice.Delta.Content`, text-completion models go through `CreateCompletionStream` and accumulate `choice.Text`, and any other model logs "model not exists" and returns. End-of-stream is now distinguished from real errors with `errors.Is(err, io.EOF)`, and the stream-creation error message now includes the model name. The new function in full:

```go
func (api *Api) GetChatMessage(conn *websocket.Conn, cli *openai.Client, mutex *sync.Mutex, requestMsg string) {
	var err error
	var strResp string

	ctx := context.Background()

	switch api.Config.Model {
	case openai.GPT3Dot5Turbo0301, openai.GPT3Dot5Turbo, openai.GPT4, openai.GPT40314, openai.GPT432K0314, openai.GPT432K:
		req := openai.ChatCompletionRequest{
			Model:       api.Config.Model,
			MaxTokens:   api.Config.MaxLength,
			Temperature: 1.0,
			Messages: []openai.ChatCompletionMessage{
				{
					Role:    openai.ChatMessageRoleUser,
					Content: requestMsg,
				},
			},
			Stream:           true,
			TopP:             1,
			FrequencyPenalty: 0.1,
			PresencePenalty:  0.1,
		}

		stream, err := cli.CreateChatCompletionStream(ctx, req)
		if err != nil {
			err = fmt.Errorf("[ERROR] create chatGPT stream model=%s error: %s", api.Config.Model, err.Error())
			chatMsg := Message{
				Kind:       "error",
				Msg:        err.Error(),
				MsgId:      uuid.New().String(),
				CreateTime: time.Now().Format("2006-01-02 15:04:05"),
			}
			mutex.Lock()
			_ = conn.WriteJSON(chatMsg)
			mutex.Unlock()
			api.Logger.LogError(err.Error())
			return
		}
		defer stream.Close()

		id := uuid.New().String()
		var i int
		for {
			response, err := stream.Recv()
			if err != nil {
				var s string
				var kind string
				if errors.Is(err, io.EOF) {
					if i == 0 {
						s = "[ERROR] NO RESPONSE, PLEASE RETRY"
						kind = "retry"
					} else {
						s = "\n\n###### [END] ######"
						kind = "chat"
					}
				} else {
					s = fmt.Sprintf("[ERROR] %s", err.Error())
					kind = "error"
				}
				chatMsg := Message{
					Kind:       kind,
					Msg:        s,
					MsgId:      id,
					CreateTime: time.Now().Format("2006-01-02 15:04:05"),
				}
				mutex.Lock()
				_ = conn.WriteJSON(chatMsg)
				mutex.Unlock()
				break
			}

			if len(response.Choices) > 0 {
				var s string
				if i == 0 {
					s = fmt.Sprintf(`%s# %s`, s, requestMsg)
				}
				for _, choice := range response.Choices {
					s = s + choice.Delta.Content
				}
				strResp = strResp + s
				chatMsg := Message{
					Kind:       "chat",
					Msg:        s,
					MsgId:      id,
					CreateTime: time.Now().Format("2006-01-02 15:04:05"),
				}
				mutex.Lock()
				_ = conn.WriteJSON(chatMsg)
				mutex.Unlock()
			}
			i = i + 1
		}
		if strResp != "" {
			api.Logger.LogInfo(fmt.Sprintf("[RESPONSE] %s\n", strResp))
		}
	case openai.GPT3TextDavinci003, openai.GPT3TextDavinci002, openai.GPT3TextCurie001, openai.GPT3TextBabbage001, openai.GPT3TextAda001, openai.GPT3TextDavinci001, openai.GPT3DavinciInstructBeta, openai.GPT3Davinci, openai.GPT3CurieInstructBeta, openai.GPT3Curie, openai.GPT3Ada, openai.GPT3Babbage:
		req := openai.CompletionRequest{
			Model:       api.Config.Model,
			MaxTokens:   api.Config.MaxLength,
			Temperature: 0.6,
			Prompt:      requestMsg,
			Stream:      true,
			//Stop: []string{"\n\n\n"},
			TopP:             1,
			FrequencyPenalty: 0.1,
			PresencePenalty:  0.1,
		}

		stream, err := cli.CreateCompletionStream(ctx, req)
		if err != nil {
			err = fmt.Errorf("[ERROR] create chatGPT stream model=%s error: %s", api.Config.Model, err.Error())
			chatMsg := Message{
				Kind:       "error",
				Msg:        err.Error(),
				MsgId:      uuid.New().String(),
				CreateTime: time.Now().Format("2006-01-02 15:04:05"),
			}
			mutex.Lock()
			_ = conn.WriteJSON(chatMsg)
			mutex.Unlock()
			api.Logger.LogError(err.Error())
			return
		}
		defer stream.Close()

		id := uuid.New().String()
		var i int
		for {
			response, err := stream.Recv()
			if err != nil {
				var s string
				var kind string
				if errors.Is(err, io.EOF) {
					if i == 0 {
						s = "[ERROR] NO RESPONSE, PLEASE RETRY"
						kind = "retry"
					} else {
						s = "\n\n###### [END] ######"
						kind = "chat"
					}
				} else {
					s = fmt.Sprintf("[ERROR] %s", err.Error())
					kind = "error"
				}
				chatMsg := Message{
					Kind:       kind,
					Msg:        s,
					MsgId:      id,
					CreateTime: time.Now().Format("2006-01-02 15:04:05"),
				}
				mutex.Lock()
				_ = conn.WriteJSON(chatMsg)
				mutex.Unlock()
				break
			}

			if len(response.Choices) > 0 {
				var s string
				if i == 0 {
					s = fmt.Sprintf(`%s# %s`, s, requestMsg)
				}
				for _, choice := range response.Choices {
					s = s + choice.Text
				}
				strResp = strResp + s
				chatMsg := Message{
					Kind:       "chat",
					Msg:        s,
					MsgId:      id,
					CreateTime: time.Now().Format("2006-01-02 15:04:05"),
				}
				mutex.Lock()
				_ = conn.WriteJSON(chatMsg)
				mutex.Unlock()
			}
			i = i + 1
		}
		if strResp != "" {
			api.Logger.LogInfo(fmt.Sprintf("[RESPONSE] %s\n", strResp))
		}
	default:
		err = fmt.Errorf("model not exists")
		api.Logger.LogError(err.Error())
		return
	}
}
```

The remaining hunks pick up the renamed ping constants and the new client constructor in `WsChat`:

```diff
@@ -188,9 +288,9 @@ func (api *Api) WsChat(c *gin.Context) {
 		_ = conn.Close()
 	}()

-	_ = conn.SetReadDeadline(time.Now().Add(pingWait))
+	_ = conn.SetReadDeadline(time.Now().Add(PingWait))
 	conn.SetPongHandler(func(s string) error {
-		_ = conn.SetReadDeadline(time.Now().Add(pingWait))
+		_ = conn.SetReadDeadline(time.Now().Add(PingWait))
 		return nil
 	})

@@ -212,7 +312,7 @@ func (api *Api) WsChat(c *gin.Context) {
 	}()

 	api.Logger.LogInfo(fmt.Sprintf("websocket connection open"))
-	cli := gogpt.NewClient(api.Config.AppKey)
+	cli := openai.NewClient(api.Config.ApiKey)

 	var latestRequestTime time.Time
 	for {
@@ -282,10 +382,10 @@ func (api *Api) WsChat(c *gin.Context) {
 			isClosed = true
 			api.Logger.LogInfo("[CLOSED] websocket receive closed message")
 		case websocket.PingMessage:
-			_ = conn.SetReadDeadline(time.Now().Add(pingWait))
+			_ = conn.SetReadDeadline(time.Now().Add(PingWait))
 			api.Logger.LogInfo("[PING] websocket receive ping message")
 		case websocket.PongMessage:
-			_ = conn.SetReadDeadline(time.Now().Add(pingWait))
+			_ = conn.SetReadDeadline(time.Now().Add(PingWait))
 			api.Logger.LogInfo("[PONG] websocket receive pong message")
 		default:
 			err = fmt.Errorf("[ERROR] websocket receive message type not text")
```
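Stripped of the websocket plumbing, the core go-openai streaming pattern the rewritten function relies on looks like this (a minimal sketch; the API key literal is a placeholder and error handling is reduced to essentials):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	cli := openai.NewClient("your-api-key") // placeholder key
	req := openai.ChatCompletionRequest{
		Model:    openai.GPT3Dot5Turbo,
		Stream:   true,
		Messages: []openai.ChatCompletionMessage{{Role: openai.ChatMessageRoleUser, Content: "Hello"}},
	}
	stream, err := cli.CreateChatCompletionStream(context.Background(), req)
	if err != nil {
		fmt.Println("create stream:", err)
		return
	}
	defer stream.Close()
	for {
		resp, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			break // normal end of stream
		}
		if err != nil {
			fmt.Println("recv:", err)
			return
		}
		for _, choice := range resp.Choices {
			fmt.Print(choice.Delta.Content) // tokens arrive incrementally
		}
	}
}
```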
chat package, Config struct:

```diff
@@ -1,9 +1,10 @@
 package chat

 type Config struct {
-	AppKey          string `yaml:"appKey" json:"appKey" bson:"appKey" validate:"required"`
+	ApiKey          string `yaml:"apiKey" json:"apiKey" bson:"apiKey" validate:"required"`
 	Port            int    `yaml:"port" json:"port" bson:"port" validate:"required"`
 	IntervalSeconds int    `yaml:"intervalSeconds" json:"intervalSeconds" bson:"intervalSeconds" validate:"required"`
+	Model           string `yaml:"model" json:"model" bson:"model" validate:"required"`
 	MaxLength       int    `yaml:"maxLength" json:"maxLength" bson:"maxLength" validate:"required"`
 	Cors            bool   `yaml:"cors" json:"cors" bson:"cors" validate:""`
 }
```
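A minimal sketch of loading and sanity-checking this configuration with gopkg.in/yaml.v3, which is already a dependency in go.mod. The loadConfig helper is illustrative: the diff only shows main.go's apiKey and model checks, not its loading code.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Mirrors chat.Config's yaml tags; json/bson tags omitted for brevity.
type Config struct {
	ApiKey          string `yaml:"apiKey"`
	Port            int    `yaml:"port"`
	IntervalSeconds int    `yaml:"intervalSeconds"`
	Model           string `yaml:"model"`
	MaxLength       int    `yaml:"maxLength"`
	Cors            bool   `yaml:"cors"`
}

func loadConfig(path string) (Config, error) {
	var c Config
	bs, err := os.ReadFile(path)
	if err != nil {
		return c, err
	}
	if err := yaml.Unmarshal(bs, &c); err != nil {
		return c, err
	}
	if c.ApiKey == "" {
		return c, fmt.Errorf("apiKey is empty") // same check main.go performs
	}
	return c, nil
}

func main() {
	c, err := loadConfig("config.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("model=%s port=%d\n", c.Model, c.Port)
}
```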
config.yaml (15 lines changed)

```diff
@@ -1,11 +1,20 @@
-# openai的appKey
-appKey: "xxxxxx"
+# Your openai.com API key
+# openai的API Key
+apiKey: "xxxxxx"
+# Service port
 # 服务端口
 port: 9000
+# The minimum interval between questions, in seconds
 # 问题发送的时间间隔不能小于多长时间,单位:秒
 intervalSeconds: 5
+# GPT model; if you use a GPT4 model, make sure the corresponding openai account has permission to use it
+# Available models include: gpt-4-32k-0314, gpt-4-32k, gpt-4-0314, gpt-4, gpt-3.5-turbo-0301, gpt-3.5-turbo, text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, text-davinci-001, davinci-instruct-beta, davinci, curie-instruct-beta, curie, ada, babbage
+# GPT模型,如果使用GPT4模型,请保证对应的openai账号有GPT4模型的使用权限
+# 可用的模型包括: gpt-4-32k-0314, gpt-4-32k, gpt-4-0314, gpt-4, gpt-3.5-turbo-0301, gpt-3.5-turbo, text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, text-davinci-001, davinci-instruct-beta, davinci, curie-instruct-beta, curie, ada, babbage
+model: gpt-3.5-turbo-0301
+# The maximum length of the returned answer
 # 返回答案的最大长度
 maxLength: 2000
+# Whether to allow cors cross-domain
 # 是否允许cors跨域
 cors: true
```
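Switching models after this change is a one-line edit to config.yaml; the value must be one of the entries in chat.GPTModels, otherwise main.go exits with "model not exists". For example, to use GPT-4 (assuming the openai account has GPT-4 access):

```yaml
model: gpt-4
```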
docker-compose.yml

```diff
@@ -1,7 +1,7 @@
 version: "3"
 services:
   chatgpt-stream:
-    image: "doryengine/chatgpt-stream:v1.0.1"
+    image: "doryengine/chatgpt-stream:v1.0.2"
     hostname: chatgpt-stream
     container_name: chatgpt-stream
     ports:
@@ -11,7 +11,7 @@ services:
       - chatgpt-service
     restart: always
   chatgpt-service:
-    image: "doryengine/chatgpt-service:v1.0.1-alpine"
+    image: "doryengine/chatgpt-service:v1.0.2-alpine"
     hostname: chatgpt-service
     container_name: chatgpt-service
     ports:
```
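Rolling a running stack onto the bumped tags follows the standard compose flow (a generic usage note, not taken from the repo docs):

```bash
# Fetch the v1.0.2 images referenced by the updated compose file
docker-compose pull

# Recreate the containers on the new images
docker-compose up -d
```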
go.mod (2 lines changed)

```diff
@@ -7,7 +7,7 @@ require (
 	github.com/gin-gonic/gin v1.8.2
 	github.com/google/uuid v1.3.0
 	github.com/gorilla/websocket v1.5.0
-	github.com/sashabaranov/go-gpt3 v1.3.3
+	github.com/sashabaranov/go-openai v1.5.4
 	github.com/sirupsen/logrus v1.9.0
 	gopkg.in/yaml.v3 v3.0.1
 )
```
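The dependency swap is mostly mechanical: the import alias and constructor name change while the one-argument constructor shape stays the same. A minimal sketch of the before and after, matching the service.go changes above (the wrapper function is illustrative):

```go
package chatclient

import openai "github.com/sashabaranov/go-openai"

// Replaces the old gogpt.NewClient(api.Config.AppKey) call from
// github.com/sashabaranov/go-gpt3; go-openai keeps the same signature.
func newClient(apiKey string) *openai.Client {
	return openai.NewClient(apiKey)
}
```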
main.go (15 lines changed)

```diff
@@ -27,8 +27,19 @@ func main() {
 		logger.LogError(err.Error())
 		return
 	}
-	if config.AppKey == "" {
-		logger.LogError(fmt.Sprintf("appKey is empty"))
+	if config.ApiKey == "" {
+		logger.LogError(fmt.Sprintf("apiKey is empty"))
+		return
+	}
+	var found bool
+	for _, model := range chat.GPTModels {
+		if model == config.Model {
+			found = true
+			break
+		}
+	}
+	if !found {
+		logger.LogError(fmt.Sprintf("model not exists"))
 		return
 	}

```