14 Commits

Author SHA1 Message Date
cookeem
3af6192633 support follow-up questions with conversation context 2023-03-27 12:00:57 +08:00
cookeem
5b75b51059 update README 2023-03-23 22:34:09 +08:00
cookeem
49b89d5aad update docker-compose 2023-03-23 22:26:38 +08:00
cookeem
87386c5061 update README 2023-03-23 22:16:30 +08:00
cookeem
e2368fc284 generate pictures from the picture description 2023-03-22 23:09:42 +08:00
cookeem
ff2410ebea v1.0.3 2023-03-22 23:05:24 +08:00
cookeem
3315e5940f support generating images from a prompt 2023-03-22 23:04:05 +08:00
cookeem
38f7a73288 update README.md 2023-03-22 16:50:53 +08:00
cookeem
7151dac97d support GPT3/GPT4 2023-03-22 16:36:47 +08:00
cookeem
086bfb3ce9 support GPT4 and custom GPT models 2023-03-22 16:18:38 +08:00
cookeem
cd58d80be9 GPT3Dot5Turbo0301 2023-03-22 11:28:21 +08:00
cookeem
00b3aa9bb8 gpt4 2023-03-22 11:22:06 +08:00
cookeem
8822742664 English README 2023-03-06 15:08:11 +08:00
cookeem
2ea1d1d02a update error messages 2023-03-05 08:59:22 +08:00
12 changed files with 485 additions and 125 deletions

.DS_Store vendored Normal file

Binary file not shown.

Dockerfile

@@ -2,11 +2,11 @@ FROM alpine:3.15.3
LABEL maintainer="cookeem"
LABEL email="cookeem@qq.com"
LABEL version="v1.0.1"
LABEL version="v1.0.3"
RUN adduser -h /chatgpt-service -u 1000 -D dory
COPY chatgpt-service /chatgpt-service/
WORKDIR /chatgpt-service
USER dory
# docker build -t doryengine/chatgpt-service:v1.0.1-alpine .
# docker build -t doryengine/chatgpt-service:v1.0.3-alpine .

README.md

@@ -1,43 +1,54 @@
# 实时ChatGPT服务基于最新的gpt-3.5-turbo-0301模型
# Real-time ChatGPT service: supports GPT3/GPT4, conversation, and generating pictures from sentences
## chatGPT-service和chatGPT-stream
- [English README](README.md)
- [中文 README](README_CN.md)
- chatGPT-service: [https://github.com/cookeem/chatgpt-service](https://github.com/cookeem/chatgpt-service)
- chatGPT-service是一个后端服务用于实时接收chatGPT的消息并通过websocket的方式实时反馈给chatGPT-stream
- chatGPT-stream: [https://github.com/cookeem/chatgpt-stream](https://github.com/cookeem/chatgpt-stream)
- chatGPT-stream是一个前端服务以websocket的方式实时接收chatGPT-service返回的消息
## About chatgpt-service and chatgpt-stream
## gitee传送门
- chatgpt-service: [https://github.com/cookeem/chatgpt-service](https://github.com/cookeem/chatgpt-service)
- chatgpt-service is a backend service that receives ChatGPT's replies in real time and streams them to chatgpt-stream over websocket
- chatgpt-stream: [https://github.com/cookeem/chatgpt-stream](https://github.com/cookeem/chatgpt-stream)
- chatgpt-stream is a front-end service that receives the messages returned by chatgpt-service in real time over websocket
## gitee
- [https://gitee.com/cookeem/chatgpt-service](https://gitee.com/cookeem/chatgpt-service)
- [https://gitee.com/cookeem/chatgpt-stream](https://gitee.com/cookeem/chatgpt-stream)
## 效果图
## Demo
- Real-time conversation mode
![](chatgpt-service.gif)
- Picture generation from sentences mode
## 快速开始
![](chatgpt-image.jpeg)
## Quick start
```bash
# 拉取代码
# Pull source code
git clone https://github.com/cookeem/chatgpt-service.git
cd chatgpt-service
# chatGPT的注册页面: https://beta.openai.com/signup
# chatGPT的注册教程: https://www.cnblogs.com/damugua/p/16969508.html
# chatGPTAPIkey管理界面: https://beta.openai.com/account/api-keys
# ChatGPT's registration page: https://beta.openai.com/signup
# ChatGPT registration tutorial: https://www.cnblogs.com/damugua/p/16969508.html
# ChatGPT API key management page: https://beta.openai.com/account/api-keys
# 修改config.yaml配置文件修改appKey改为你的openai.com的appKey
# Edit the config.yaml configuration file and set apiKey to your openai.com API key
vi config.yaml
# openai的appKey改为你的apiKey
appKey: "xxxxxx"
# your openai.com API key
apiKey: "xxxxxx"
# create pictures directory
mkdir -p assets
chown -R 1000:1000 assets
# 使用docker-compose启动服务
# Start the service with docker-compose
docker-compose up -d
# 查看服务状态
# Check service status
docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------
@@ -45,28 +56,31 @@ chatgpt-service /chatgpt-service/chatgpt-s ... Up 0.0.0.0:59142->9000/t
chatgpt-stream /docker-entrypoint.sh ngin ... Up 0.0.0.0:3000->80/tcp,:::3000->80/tcp
# 访问页面,请保证你的服务器可以访问chatGPT的api接口
# To access the page, please ensure that your server can access the chatGPT API
# http://localhost:3000
```
## 如何编译
- Enter a question directly and the service calls the ChatGPT interface to return the answer
- Enter a picture description after `/image ` and the service calls the DALL-E2 interface to generate a picture from that description
## How to build
```bash
# 拉取构建依赖
# Pull build dependencies
go mod tidy
# 项目编译
# Compile the project
go build
# 执行程序
# Run the service
./chatgpt-service
# 相关接口
# API URL
# ws://localhost:9000/api/ws/chat
# 安装wscat
# Install wscat
npm install -g wscat
# 使用wscat测试websocket然后输入你要查询的问题
# Use wscat to test websocket, then enter the question you want to query
wscat --connect ws://localhost:9000/api/ws/chat
```
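A note on the websocket protocol the Quick start exercises: the service pushes JSON-encoded `Message` frames whose Go fields are `Kind`, `Msg`, `MsgId` and `CreateTime` (the JSON tag casing is not visible in this diff), and it pings clients every 50 seconds to keep the 60-second read deadline alive; gorilla/websocket answers those pings automatically as long as the client keeps reading. A minimal Go sketch equivalent to the wscat test above:

```go
// Minimal sketch of a websocket client for ws://localhost:9000/api/ws/chat,
// equivalent to the wscat test above. It sends one question as a text frame
// and prints every JSON frame the server streams back. Assumes the service
// from this repo is running locally via docker-compose or `go build`.
package main

import (
	"fmt"
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:9000/api/ws/chat", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A question is just a plain text frame, exactly like typing into wscat.
	if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
		log.Fatal(err)
	}

	// Read until the server closes the connection; reading also lets the
	// library answer the server's keepalive pings with pongs.
	for {
		_, data, err := conn.ReadMessage()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(data))
	}
}
```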

README_CN.md Normal file

@@ -0,0 +1,86 @@
# 实时ChatGPT服务支持GPT3/GPT4支持对话和通过句子生成图片
- [English README](README.md)
- [中文 README](README_CN.md)
## chatGPT-service和chatGPT-stream
- chatGPT-service: [https://github.com/cookeem/chatgpt-service](https://github.com/cookeem/chatgpt-service)
- chatGPT-service是一个后端服务用于实时接收chatGPT的消息并通过websocket的方式实时反馈给chatGPT-stream
- chatGPT-stream: [https://github.com/cookeem/chatgpt-stream](https://github.com/cookeem/chatgpt-stream)
- chatGPT-stream是一个前端服务以websocket的方式实时接收chatGPT-service返回的消息
## gitee传送门
- [https://gitee.com/cookeem/chatgpt-service](https://gitee.com/cookeem/chatgpt-service)
- [https://gitee.com/cookeem/chatgpt-stream](https://gitee.com/cookeem/chatgpt-stream)
## 效果图
- 实时对话模式
![](chatgpt-service.gif)
- 通过句子生成图片模式
![](chatgpt-image.jpeg)
## 快速开始
```bash
# 拉取代码
git clone https://github.com/cookeem/chatgpt-service.git
cd chatgpt-service
# chatGPT的注册页面: https://beta.openai.com/signup
# chatGPT的注册教程: https://www.cnblogs.com/damugua/p/16969508.html
# chatGPT的APIkey管理界面: https://beta.openai.com/account/api-keys
# 修改config.yaml配置文件修改apiKey改为你的openai.com的apiKey
vi config.yaml
# openai的apiKey改为你的apiKey
apiKey: "xxxxxx"
# 创建生成的图片目录
mkdir -p assets
chown -R 1000:1000 assets
# 使用docker-compose启动服务
docker-compose up -d
# 查看服务状态
docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------
chatgpt-service /chatgpt-service/chatgpt-s ... Up 0.0.0.0:59142->9000/tcp
chatgpt-stream /docker-entrypoint.sh ngin ... Up 0.0.0.0:3000->80/tcp,:::3000->80/tcp
# 访问页面请保证你的服务器可以访问chatGPT的api接口
# http://localhost:3000
```
- 直接输入问题则调用ChatGPT接口返回答案
- `/image `后边输入想要的图片描述则调用DALL-E2接口通过图片描述自动生成图片
## 如何编译
```bash
# 拉取构建依赖
go mod tidy
# 项目编译
go build
# 执行程序
./chatgpt-service
# 相关接口
# ws://localhost:9000/api/ws/chat
# 安装wscat
npm install -g wscat
# 使用wscat测试websocket然后输入你要查询的问题
wscat --connect ws://localhost:9000/api/ws/chat
```


@@ -1,7 +1,10 @@
package chat
import (
"fmt"
"github.com/sashabaranov/go-openai"
log "github.com/sirupsen/logrus"
"math/rand"
"os"
"time"
)
@@ -38,9 +41,48 @@ func (logger Logger) LogPanic(args ...interface{}) {
log.Panic(args...)
}
func RandomString(n int) string {
var letter []rune
lowerChars := "abcdefghijklmnopqrstuvwxyz"
numberChars := "0123456789"
chars := fmt.Sprintf("%s%s", lowerChars, numberChars)
letter = []rune(chars)
var str string
b := make([]rune, n)
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
for i := range b {
b[i] = letter[seededRand.Intn(len(letter))]
}
str = string(b)
return str
}
const (
StatusFail string = "FAIL"
pingPeriod = time.Second * 50
pingWait = time.Second * 60
PingPeriod = time.Second * 50
PingWait = time.Second * 60
)
var (
GPTModels = []string{
openai.GPT432K0314,
openai.GPT432K,
openai.GPT40314,
openai.GPT4,
openai.GPT3Dot5Turbo0301,
openai.GPT3Dot5Turbo,
openai.GPT3TextDavinci003,
openai.GPT3TextDavinci002,
openai.GPT3TextCurie001,
openai.GPT3TextBabbage001,
openai.GPT3TextAda001,
openai.GPT3TextDavinci001,
openai.GPT3DavinciInstructBeta,
openai.GPT3Davinci,
openai.GPT3CurieInstructBeta,
openai.GPT3Curie,
openai.GPT3Ada,
openai.GPT3Babbage,
}
)
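`RandomString` above exists to name the generated images (`RandomString(16)` plus `.png` later in this diff). It re-seeds a fresh `math/rand` source on every call; below is a sketch of the more idiomatic single package-level source, offered as an alternative rather than what the repo does:

```go
// Alternative sketch: seed one package-level source instead of re-seeding
// per call. math/rand is acceptable here because the file names only need
// to be unlikely to collide, not cryptographically unpredictable.
package chat

import (
	"math/rand"
	"time"
)

var seededRand = rand.New(rand.NewSource(time.Now().UnixNano()))

func randomString(n int) string {
	const letters = "abcdefghijklmnopqrstuvwxyz0123456789"
	b := make([]byte, n)
	for i := range b {
		b[i] = letters[seededRand.Intn(len(letters))]
	}
	return string(b)
}
```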


@@ -2,8 +2,12 @@ package chat
import (
"context"
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"os"
"strings"
"sync"
"time"
@@ -11,7 +15,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/google/uuid"
"github.com/gorilla/websocket"
gogpt "github.com/sashabaranov/go-gpt3"
openai "github.com/sashabaranov/go-openai"
)
type Api struct {
@@ -46,7 +50,7 @@ func (api *Api) responseFunc(c *gin.Context, startTime time.Time, status, msg st
func (api *Api) wsPingMsg(conn *websocket.Conn, chClose, chIsCloseSet chan int) {
var err error
ticker := time.NewTicker(pingPeriod)
ticker := time.NewTicker(PingPeriod)
var mutex = &sync.Mutex{}
@@ -57,7 +61,7 @@ func (api *Api) wsPingMsg(conn *websocket.Conn, chClose, chIsCloseSet chan int)
for {
select {
case <-ticker.C:
conn.SetWriteDeadline(time.Now().Add(pingWait))
conn.SetWriteDeadline(time.Now().Add(PingWait))
mutex.Lock()
err = conn.WriteMessage(websocket.PingMessage, nil)
if err != nil {
@@ -72,30 +76,206 @@ func (api *Api) wsPingMsg(conn *websocket.Conn, chClose, chIsCloseSet chan int)
}
}
func (api *Api) GetChatMessage(conn *websocket.Conn, cli *gogpt.Client, mutex *sync.Mutex, requestMsg string) {
func (api *Api) GetChatMessage(conn *websocket.Conn, cli *openai.Client, mutex *sync.Mutex, reqMsgs []openai.ChatCompletionMessage) {
var err error
var strResp string
req := gogpt.ChatCompletionRequest{
Model: gogpt.GPT3Dot5Turbo0301,
MaxTokens: api.Config.MaxLength,
Temperature: 1.0,
Messages: []gogpt.ChatCompletionMessage{
{
Role: "user",
Content: requestMsg,
},
},
Stream: true,
TopP: 1,
FrequencyPenalty: 0.1,
PresencePenalty: 0.1,
}
ctx := context.Background()
stream, err := cli.CreateChatCompletionStream(ctx, req)
if err != nil {
err = fmt.Errorf("[ERROR] create chatGPT stream error: %s", err.Error())
switch api.Config.Model {
case openai.GPT3Dot5Turbo0301, openai.GPT3Dot5Turbo, openai.GPT4, openai.GPT40314, openai.GPT432K0314, openai.GPT432K:
prompt := reqMsgs[len(reqMsgs)-1].Content
req := openai.ChatCompletionRequest{
Model: api.Config.Model,
MaxTokens: api.Config.MaxLength,
Temperature: 1.0,
Messages: reqMsgs,
Stream: true,
TopP: 1,
FrequencyPenalty: 0.1,
PresencePenalty: 0.1,
}
stream, err := cli.CreateChatCompletionStream(ctx, req)
if err != nil {
err = fmt.Errorf("[ERROR] create ChatGPT stream model=%s error: %s", api.Config.Model, err.Error())
chatMsg := Message{
Kind: "error",
Msg: err.Error(),
MsgId: uuid.New().String(),
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
api.Logger.LogError(err.Error())
return
}
defer stream.Close()
id := uuid.New().String()
var i int
for {
response, err := stream.Recv()
if err != nil {
var s string
var kind string
if errors.Is(err, io.EOF) {
if i == 0 {
s = "[ERROR] NO RESPONSE, PLEASE RETRY"
kind = "retry"
} else {
s = "\n\n###### [END] ######"
kind = "chat"
}
} else {
s = fmt.Sprintf("[ERROR] %s", err.Error())
kind = "error"
}
chatMsg := Message{
Kind: kind,
Msg: s,
MsgId: id,
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
break
}
if len(response.Choices) > 0 {
var s string
if i == 0 {
s = fmt.Sprintf("%s# %s\n\n", s, prompt)
}
for _, choice := range response.Choices {
s = s + choice.Delta.Content
}
strResp = strResp + s
chatMsg := Message{
Kind: "chat",
Msg: s,
MsgId: id,
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
}
i = i + 1
}
if strResp != "" {
api.Logger.LogInfo(fmt.Sprintf("[RESPONSE] %s\n", strResp))
}
case openai.GPT3TextDavinci003, openai.GPT3TextDavinci002, openai.GPT3TextCurie001, openai.GPT3TextBabbage001, openai.GPT3TextAda001, openai.GPT3TextDavinci001, openai.GPT3DavinciInstructBeta, openai.GPT3Davinci, openai.GPT3CurieInstructBeta, openai.GPT3Curie, openai.GPT3Ada, openai.GPT3Babbage:
prompt := reqMsgs[len(reqMsgs)-1].Content
req := openai.CompletionRequest{
Model: api.Config.Model,
MaxTokens: api.Config.MaxLength,
Temperature: 0.6,
Prompt: prompt,
Stream: true,
//Stop: []string{"\n\n\n"},
TopP: 1,
FrequencyPenalty: 0.1,
PresencePenalty: 0.1,
}
stream, err := cli.CreateCompletionStream(ctx, req)
if err != nil {
err = fmt.Errorf("[ERROR] create ChatGPT stream model=%s error: %s", api.Config.Model, err.Error())
chatMsg := Message{
Kind: "error",
Msg: err.Error(),
MsgId: uuid.New().String(),
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
api.Logger.LogError(err.Error())
return
}
defer stream.Close()
id := uuid.New().String()
var i int
for {
response, err := stream.Recv()
if err != nil {
var s string
var kind string
if errors.Is(err, io.EOF) {
if i == 0 {
s = "[ERROR] NO RESPONSE, PLEASE RETRY"
kind = "retry"
} else {
s = "\n\n###### [END] ######"
kind = "chat"
}
} else {
s = fmt.Sprintf("[ERROR] %s", err.Error())
kind = "error"
}
chatMsg := Message{
Kind: kind,
Msg: s,
MsgId: id,
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
break
}
if len(response.Choices) > 0 {
var s string
if i == 0 {
s = fmt.Sprintf("%s# %s\n\n", s, prompt)
}
for _, choice := range response.Choices {
s = s + choice.Text
}
strResp = strResp + s
chatMsg := Message{
Kind: "chat",
Msg: s,
MsgId: id,
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
}
i = i + 1
}
if strResp != "" {
api.Logger.LogInfo(fmt.Sprintf("[RESPONSE] %s\n", strResp))
}
default:
err = fmt.Errorf("model not exists")
api.Logger.LogError(err.Error())
return
}
}
func (api *Api) GetImageMessage(conn *websocket.Conn, cli *openai.Client, mutex *sync.Mutex, requestMsg string) {
var err error
ctx := context.Background()
prompt := strings.TrimPrefix(requestMsg, "/image ")
req := openai.ImageRequest{
Prompt: prompt,
Size: openai.CreateImageSize256x256,
ResponseFormat: openai.CreateImageResponseFormatB64JSON,
N: 1,
}
sendError := func(err error) {
err = fmt.Errorf("[ERROR] generate image error: %s", err.Error())
chatMsg := Message{
Kind: "error",
Msg: err.Error(),
@@ -106,60 +286,56 @@ func (api *Api) GetChatMessage(conn *websocket.Conn, cli *gogpt.Client, mutex *s
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
api.Logger.LogError(err.Error())
}
resp, err := cli.CreateImage(ctx, req)
if err != nil {
err = fmt.Errorf("[ERROR] generate image error: %s", err.Error())
sendError(err)
return
}
defer stream.Close()
id := uuid.New().String()
var i int
for {
response, err := stream.Recv()
if err != nil {
var s string
var kind string
if i == 0 {
s = "[ERROR] NO RESPONSE, PLEASE RETRY"
kind = "retry"
} else {
s = "\n\n###### [END] ######"
kind = "chat"
}
chatMsg := Message{
Kind: kind,
Msg: s,
MsgId: id,
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
break
}
if len(response.Choices) > 0 {
var s string
if i == 0 {
s = fmt.Sprintf(`%s# %s`, s, requestMsg)
}
for _, choice := range response.Choices {
s = s + choice.Delta.Content
}
strResp = strResp + s
chatMsg := Message{
Kind: "chat",
Msg: s,
MsgId: id,
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
}
i = i + 1
if len(resp.Data) == 0 {
err = fmt.Errorf("[ERROR] generate image error: result is empty")
sendError(err)
return
}
if strResp != "" {
api.Logger.LogInfo(fmt.Sprintf("[RESPONSE] %s\n", strResp))
imgBytes, err := base64.StdEncoding.DecodeString(resp.Data[0].B64JSON)
if err != nil {
err = fmt.Errorf("[ERROR] image base64 decode error: %s", err.Error())
sendError(err)
return
}
date := time.Now().Format("2006-01-02")
imageDir := fmt.Sprintf("assets/images/%s", date)
err = os.MkdirAll(imageDir, 0700)
if err != nil {
err = fmt.Errorf("[ERROR] create image directory error: %s", err.Error())
sendError(err)
return
}
imageFileName := fmt.Sprintf("%s.png", RandomString(16))
err = os.WriteFile(fmt.Sprintf("%s/%s", imageDir, imageFileName), imgBytes, 0600)
if err != nil {
err = fmt.Errorf("[ERROR] write png image error: %s", err.Error())
sendError(err)
return
}
msg := fmt.Sprintf("api/%s/%s", imageDir, imageFileName)
chatMsg := Message{
Kind: "image",
Msg: msg,
MsgId: uuid.New().String(),
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
api.Logger.LogInfo(fmt.Sprintf("[IMAGE] # %s\n%s", requestMsg, msg))
return
}
func (api *Api) WsChat(c *gin.Context) {
@@ -188,9 +364,9 @@ func (api *Api) WsChat(c *gin.Context) {
_ = conn.Close()
}()
_ = conn.SetReadDeadline(time.Now().Add(pingWait))
_ = conn.SetReadDeadline(time.Now().Add(PingWait))
conn.SetPongHandler(func(s string) error {
_ = conn.SetReadDeadline(time.Now().Add(pingWait))
_ = conn.SetReadDeadline(time.Now().Add(PingWait))
return nil
})
@@ -212,7 +388,9 @@ func (api *Api) WsChat(c *gin.Context) {
}()
api.Logger.LogInfo(fmt.Sprintf("websocket connection open"))
cli := gogpt.NewClient(api.Config.AppKey)
cli := openai.NewClient(api.Config.ApiKey)
reqMsgs := make([]openai.ChatCompletionMessage, 0)
var latestRequestTime time.Time
for {
@@ -266,26 +444,43 @@ func (api *Api) WsChat(c *gin.Context) {
mutex.Unlock()
api.Logger.LogError(err.Error())
} else {
chatMsg := Message{
Kind: "receive",
Msg: requestMsg,
MsgId: uuid.New().String(),
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
if strings.HasPrefix(requestMsg, "/image ") {
chatMsg := Message{
Kind: "receive",
Msg: requestMsg,
MsgId: uuid.New().String(),
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
go api.GetImageMessage(conn, cli, mutex, requestMsg)
} else {
chatMsg := Message{
Kind: "receive",
Msg: requestMsg,
MsgId: uuid.New().String(),
CreateTime: time.Now().Format("2006-01-02 15:04:05"),
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
reqMsgs = append(reqMsgs, openai.ChatCompletionMessage{
Role: openai.ChatMessageRoleUser,
Content: requestMsg,
})
go api.GetChatMessage(conn, cli, mutex, reqMsgs)
}
mutex.Lock()
_ = conn.WriteJSON(chatMsg)
mutex.Unlock()
go api.GetChatMessage(conn, cli, mutex, requestMsg)
}
}
case websocket.CloseMessage:
isClosed = true
api.Logger.LogInfo("[CLOSED] websocket receive closed message")
case websocket.PingMessage:
_ = conn.SetReadDeadline(time.Now().Add(pingWait))
_ = conn.SetReadDeadline(time.Now().Add(PingWait))
api.Logger.LogInfo("[PING] websocket receive ping message")
case websocket.PongMessage:
_ = conn.SetReadDeadline(time.Now().Add(pingWait))
_ = conn.SetReadDeadline(time.Now().Add(PingWait))
api.Logger.LogInfo("[PONG] websocket receive pong message")
default:
err = fmt.Errorf("[ERROR] websocket receive message type not text")
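Two protocol details worth spelling out from the hunks above. First, frames carry a `Kind` of `receive` (echo of the question), `chat` (streamed answer chunks), `image` (path of a generated picture), `retry`, or `error`. Second, the conversation context is built by appending every user question to `reqMsgs` and resending the whole slice; assistant replies are never appended back, so the model sees earlier questions but not its own answers. A compilable sketch of the request shape after two questions, with placeholder content:

```go
// Sketch of the ChatCompletionRequest that GetChatMessage builds after two
// questions have been appended to reqMsgs. Only user turns accumulate; the
// question strings here are placeholders.
package main

import (
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	reqMsgs := []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "first question"},
		{Role: openai.ChatMessageRoleUser, Content: "follow-up question"},
	}
	req := openai.ChatCompletionRequest{
		Model:    openai.GPT3Dot5Turbo,
		Messages: reqMsgs,
		Stream:   true,
	}
	fmt.Printf("sending %d messages to %s\n", len(req.Messages), req.Model)
}
```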


@@ -1,9 +1,10 @@
package chat
type Config struct {
AppKey string `yaml:"appKey" json:"appKey" bson:"appKey" validate:"required"`
ApiKey string `yaml:"apiKey" json:"apiKey" bson:"apiKey" validate:"required"`
Port int `yaml:"port" json:"port" bson:"port" validate:"required"`
IntervalSeconds int `yaml:"intervalSeconds" json:"intervalSeconds" bson:"intervalSeconds" validate:"required"`
Model string `yaml:"model" json:"model" bson:"model" validate:"required"`
MaxLength int `yaml:"maxLength" json:"maxLength" bson:"maxLength" validate:"required"`
Cors bool `yaml:"cors" json:"cors" bson:"cors" validate:""`
}
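The struct above carries `yaml` tags for exactly the keys in config.yaml. The repo's actual loader lives in main.go and is not shown in this diff, but a minimal sketch of reading the file with gopkg.in/yaml.v3 (already in go.mod) looks like:

```go
// Minimal sketch of loading config.yaml into the Config struct above using
// gopkg.in/yaml.v3; the repo's real loader (and its validate-tag handling)
// is not part of this diff.
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type Config struct {
	ApiKey          string `yaml:"apiKey"`
	Port            int    `yaml:"port"`
	IntervalSeconds int    `yaml:"intervalSeconds"`
	Model           string `yaml:"model"`
	MaxLength       int    `yaml:"maxLength"`
	Cors            bool   `yaml:"cors"`
}

func main() {
	bs, err := os.ReadFile("config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var config Config
	if err := yaml.Unmarshal(bs, &config); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", config)
}
```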

chatgpt-image.jpeg Normal file

Binary file not shown.


config.yaml

@@ -1,11 +1,20 @@
# openai的appKey
appKey: "xxxxxx"
# Your openai.com API key
# openai的API Key
apiKey: "xxxxxx"
# Service port
# 服务端口
port: 9000
# Minimum interval between two sent questions, in seconds
# 问题发送的时间间隔不能小于多长时间,单位:秒
intervalSeconds: 5
# GPT model; if you use a GPT4 model, make sure the corresponding openai account has permission to use GPT4 models
# Available models include: gpt-4-32k-0314, gpt-4-32k, gpt-4-0314, gpt-4, gpt-3.5-turbo-0301, gpt-3.5-turbo, text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, text-davinci-001, davinci-instruct-beta, davinci, curie-instruct-beta, curie, ada, babbage
# GPT模型如果使用GPT4模型请保证对应的openai账号有GPT4模型的使用权限
# 可用的模型包括: gpt-4-32k-0314, gpt-4-32k, gpt-4-0314, gpt-4, gpt-3.5-turbo-0301, gpt-3.5-turbo, text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, text-davinci-001, davinci-instruct-beta, davinci, curie-instruct-beta, curie, ada, babbage
model: gpt-3.5-turbo-0301
# The maximum length of the returned answer
# 返回答案的最大长度
maxLength: 2000
# Whether to allow CORS cross-origin requests
# 是否允许cors跨域
cors: true
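`intervalSeconds` pairs with the `latestRequestTime` variable introduced in WsChat above; the rejection branch itself is elided from this diff, so the following is a hypothetical sketch of its shape, with assumed names:

```go
// Hypothetical sketch of the rate guard implied by intervalSeconds and
// latestRequestTime; the actual branch is not shown in the diff, so the
// function name and exact behavior are assumptions.
package main

import (
	"fmt"
	"time"
)

func allowQuestion(latestRequestTime time.Time, intervalSeconds int) bool {
	return time.Since(latestRequestTime) >= time.Duration(intervalSeconds)*time.Second
}

func main() {
	last := time.Now().Add(-2 * time.Second)
	// false: only 2s have passed since the last question, below the 5s limit
	fmt.Println(allowQuestion(last, 5))
}
```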


@@ -1,7 +1,7 @@
version: "3"
services:
chatgpt-stream:
image: "doryengine/chatgpt-stream:v1.0.1"
image: "doryengine/chatgpt-stream:v1.0.3"
hostname: chatgpt-stream
container_name: chatgpt-stream
ports:
@@ -11,12 +11,13 @@ services:
- chatgpt-service
restart: always
chatgpt-service:
image: "doryengine/chatgpt-service:v1.0.1-alpine"
image: "doryengine/chatgpt-service:v1.0.3-alpine"
hostname: chatgpt-service
container_name: chatgpt-service
ports:
- "9000:9000"
volumes:
- ./config.yaml:/chatgpt-service/config.yaml
- ./assets:/chatgpt-service/assets
command: /chatgpt-service/chatgpt-service
restart: always
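The new `./assets` bind mount is what the Quick start's `mkdir -p assets && chown -R 1000:1000 assets` step prepares for: the container runs as the `dory` user created with uid 1000 in the Dockerfile, and `GetImageMessage` writes generated pictures beneath `assets/images/<date>/` inside that mount.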

go.mod

@@ -7,7 +7,7 @@ require (
github.com/gin-gonic/gin v1.8.2
github.com/google/uuid v1.3.0
github.com/gorilla/websocket v1.5.0
github.com/sashabaranov/go-gpt3 v1.3.3
github.com/sashabaranov/go-openai v1.5.7
github.com/sirupsen/logrus v1.9.0
gopkg.in/yaml.v3 v3.0.1
)

main.go

@@ -27,8 +27,19 @@ func main() {
logger.LogError(err.Error())
return
}
if config.AppKey == "" {
logger.LogError(fmt.Sprintf("appKey is empty"))
if config.ApiKey == "" {
logger.LogError(fmt.Sprintf("apiKey is empty"))
return
}
var found bool
for _, model := range chat.GPTModels {
if model == config.Model {
found = true
break
}
}
if !found {
logger.LogError(fmt.Sprintf("model not exists"))
return
}
@@ -45,6 +56,7 @@ func main() {
}
groupApi := r.Group("/api")
groupApi.Static("/assets", "assets")
groupWs := groupApi.Group("/ws")
groupWs.GET("chat", api.WsChat)
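Because `groupApi.Static("/assets", "assets")` mounts the assets directory under the `/api` group, the `api/assets/images/...` path that GetImageMessage returns in an `image` message can be fetched straight from the service. A sketch, with a hypothetical file name (real names come from `RandomString(16)` under a date directory):

```go
// Sketch: download a generated picture via the static route added in main.go.
// The date directory and 16-character file name below are hypothetical
// examples of what GetImageMessage produces.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	url := "http://localhost:9000/api/assets/images/2023-03-22/abcdefgh12345678.png"
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("picture.png")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```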