1. Introduction
Hello everyone, I'm Wang Laoshi. OpenAI recently opened up its latest gpt-3.5-turbo model, which, according to the announcement, is the same model used on the official ChatGPT site. If you haven't tried ChatGPT yet, today I'll show you how to get past the network restrictions and build an intelligent assistant of your own. This article covers applying for an API Key and setting up a network proxy, so without further ado, let's get started.
2. Integration Process
2.1. Obtaining an API Key
The first step is to obtain an API Key for the OpenAI API. This key is the token you use when calling the API, mainly for authentication. To get one you first need an OpenAI account; for details, see my other article, the step-by-step ChatGPT registration guide.
- Open platform.openai.com/ and click "View API Key".
- Click to create a key.
- A popup shows the generated key. Copy it right away, because you won't be able to view it again later and would have to create a new one.
Save the API Key somewhere safe for later use.
2.2. Checking API Usage
Here you can check your API usage. A newly registered account comes with a $5 trial credit by default; it used to be $18, so when the API price dropped, the trial credit took quite a haircut too.
2.3. Core Code Implementation
2.3.1. pom.xml dependencies
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.webtap</groupId>
    <artifactId>webtap</artifactId>
    <version>0.0.1</version>
    <packaging>jar</packaging>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.2.RELEASE</version>
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <dependency>
            <groupId>nz.net.ultraq.thymeleaf</groupId>
            <artifactId>thymeleaf-layout-dialect</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-mail</artifactId>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.4</version>
        </dependency>
        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jsoup</groupId>
            <artifactId>jsoup</artifactId>
            <version>1.9.2</version>
        </dependency>
        <!-- alibaba.fastjson -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.56</version>
        </dependency>
        <dependency>
            <groupId>net.sourceforge.nekohtml</groupId>
            <artifactId>nekohtml</artifactId>
            <version>1.9.22</version>
        </dependency>
        <dependency>
            <groupId>com.github.pagehelper</groupId>
            <artifactId>pagehelper-spring-boot-starter</artifactId>
            <version>1.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpasyncclient</artifactId>
            <version>4.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore-nio</artifactId>
            <version>4.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.3.5</version>
            <exclusions>
                <exclusion>
                    <artifactId>commons-codec</artifactId>
                    <groupId>commons-codec</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>commons-httpclient</groupId>
            <artifactId>commons-httpclient</artifactId>
            <version>3.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>commons-codec</artifactId>
                    <groupId>commons-codec</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>1.3.1</version>
        </dependency>
        <dependency>
            <groupId>com.github.ulisesbocchio</groupId>
            <artifactId>jasypt-spring-boot-starter</artifactId>
            <version>2.0.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
2.3.2. Entity class ChatMessage.java
Holds the message to be sent. The annotations use Lombok; if you're not using Lombok, generate the constructors plus getters and setters yourself.
@Data
@NoArgsConstructor
@AllArgsConstructor
public class ChatMessage {
    // message role
    String role;
    // message content
    String content;
}
2.3.3. Entity class ChatCompletionRequest.java
Entity class for the request parameters. The fields mean:
- model: which model to use, e.g. gpt-3.5-turbo
- messages: the list of messages to send
- temperature: sampling temperature, from 0 to 2; lower means more focused and precise, higher means more wide-ranging replies with less repetition
- n: how many completions to return per request
- stream: whether to stream the response incrementally, the way the ChatGPT UI does
- max_tokens: the maximum number of tokens allowed in the generated answer
- user: an identifier for the end user of the conversation
@Data
@Builder
public class ChatCompletionRequest {
    String model;
    List<ChatMessage> messages;
    Double temperature;
    Integer n;
    Boolean stream;
    List<String> stop;
    Integer max_tokens;
    String user;
}
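To make the mapping concrete, a request built from this class and serialized with fastjson would look roughly as follows (the values here are illustrative; note that max_tokens is deliberately declared in snake_case so the serialized field name matches what the API expects):

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "Hello" }
  ],
  "temperature": 1.0,
  "n": 1,
  "max_tokens": 500,
  "user": "testing"
}
```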
2.3.4. Entity class ExecuteRet.java
Receives the content returned by a request along with the execution result.
/**
 * Result of an API call
 */
public class ExecuteRet {
    /**
     * whether the operation succeeded
     */
    private final boolean success;
    /**
     * the returned content
     */
    private final String respStr;
    /**
     * the request method
     */
    private final HttpMethod method;
    /**
     * statusCode
     */
    private final int statusCode;

    public ExecuteRet(boolean success, String respStr, HttpMethod method, int statusCode) {
        this.success = success;
        this.respStr = respStr;
        this.method = method;
        this.statusCode = statusCode;
    }

    @Override
    public String toString() {
        return String.format("[success:%s, respStr:%s, statusCode:%s]", success, respStr, statusCode);
    }

    /**
     * @return the isSuccess
     */
    public boolean isSuccess() {
        return success;
    }

    /**
     * @return the !isSuccess
     */
    public boolean isNotSuccess() {
        return !success;
    }

    /**
     * @return the respStr
     */
    public String getRespStr() {
        return respStr;
    }

    /**
     * @return the statusCode
     */
    public int getStatusCode() {
        return statusCode;
    }

    /**
     * @return the method
     */
    public HttpMethod getMethod() {
        return method;
    }
}
2.3.5. Entity class ChatCompletionChoice.java
Receives the data returned by ChatGPT.
@Data
public class ChatCompletionChoice {
    Integer index;
    ChatMessage message;
    String finishReason;
}
2.3.6. Core API client class OpenAiApi.java
Uses HttpClient to call the API, supporting both POST and GET requests. `url` comes from the configuration property open.ai.url and is the API base address, https://api.openai.com/; `token` is the API key obtained earlier. When executing a POST or GET, the header headers.put("Authorization", "Bearer " + token); is added so the request passes authentication.
@Slf4j
@Component
public class OpenAiApi {

    @Value("${open.ai.url}")
    private String url;

    @Value("${open.ai.token}")
    private String token;

    private static final MultiThreadedHttpConnectionManager CONNECTION_MANAGER = new MultiThreadedHttpConnectionManager();

    static {
        // default maximum connections per host
        CONNECTION_MANAGER.getParams().setDefaultMaxConnectionsPerHost(20);
        // maximum total connections, default 20
        CONNECTION_MANAGER.getParams().setMaxTotalConnections(20);
        // connection timeout
        CONNECTION_MANAGER.getParams().setConnectionTimeout(60000);
        // read timeout
        CONNECTION_MANAGER.getParams().setSoTimeout(60000);
    }

    public ExecuteRet get(String path, Map<String, String> headers) {
        GetMethod method = new GetMethod(url + path);
        if (headers == null) {
            headers = new HashMap<>();
        }
        headers.put("Authorization", "Bearer " + token);
        for (Map.Entry<String, String> h : headers.entrySet()) {
            method.setRequestHeader(h.getKey(), h.getValue());
        }
        return execute(method);
    }

    public ExecuteRet post(String path, String json, Map<String, String> headers) {
        try {
            PostMethod method = new PostMethod(url + path);
            // log the request parameters
            log.info(String.format("POST JSON HttpMethod's Params = %s", json));
            StringRequestEntity entity = new StringRequestEntity(json, "application/json", "UTF-8");
            method.setRequestEntity(entity);
            if (headers == null) {
                headers = new HashMap<>();
            }
            headers.put("Authorization", "Bearer " + token);
            for (Map.Entry<String, String> h : headers.entrySet()) {
                method.setRequestHeader(h.getKey(), h.getValue());
            }
            return execute(method);
        } catch (UnsupportedEncodingException ex) {
            log.error(ex.getMessage(), ex);
        }
        return new ExecuteRet(false, "", null, -1);
    }

    public ExecuteRet execute(HttpMethod method) {
        HttpClient client = new HttpClient(CONNECTION_MANAGER);
        int statusCode = -1;
        String respStr = null;
        boolean isSuccess = false;
        try {
            client.getParams().setParameter(HttpMethodParams.HTTP_CONTENT_CHARSET, "UTF-8");
            statusCode = client.executeMethod(method);
            InputStreamReader inputStreamReader = new InputStreamReader(method.getResponseBodyAsStream(), "UTF-8");
            BufferedReader reader = new BufferedReader(inputStreamReader);
            StringBuilder stringBuilder = new StringBuilder(100);
            String str;
            while ((str = reader.readLine()) != null) {
                log.debug("read line: " + str);
                stringBuilder.append(str.trim());
            }
            respStr = stringBuilder.toString();
            log.info(String.format("Response String = %s, Length = %d", respStr, respStr.length()));
            inputStreamReader.close();
            reader.close();
            // a 200 status means the call succeeded
            isSuccess = (statusCode == HttpStatus.SC_OK);
        } catch (IOException ex) {
            // don't swallow I/O failures silently
            log.error(ex.getMessage(), ex);
        } finally {
            method.releaseConnection();
        }
        return new ExecuteRet(isSuccess, respStr, method, statusCode);
    }
}
2.3.7. API path constants: PathConstant.java
Maintains the list of supported API endpoints.
public class PathConstant {

    public static class MODEL {
        // list available models
        public static String MODEL_LIST = "/v1/models";
    }

    public static class COMPLETIONS {
        public static String CREATE_COMPLETION = "/v1/completions";
        // create a chat conversation
        public static String CREATE_CHAT_COMPLETION = "/v1/chat/completions";
    }
}
2.3.8. Unit test for the API call: OpenAiApplicationTests.java
With all the core code ready, let's write a unit test to try out the API call.
@SpringBootTest
@RunWith(SpringRunner.class)
public class OpenAiApplicationTests {

    @Autowired
    private OpenAiApi openAiApi;

    private final List<ChatMessage> messages = new ArrayList<>();

    @Test
    public void createChatCompletion2() {
        Scanner in = new Scanner(System.in);
        String input = in.next();
        ChatMessage userMessage = new ChatMessage("user", input);
        messages.add(userMessage);
        ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo-0301")
                .messages(messages)
                .user("testing")
                .max_tokens(500)
                .temperature(1.0)
                .build();
        ExecuteRet executeRet = openAiApi.post(PathConstant.COMPLETIONS.CREATE_CHAT_COMPLETION,
                JSONObject.toJSONString(chatCompletionRequest), null);
        JSONObject result = JSONObject.parseObject(executeRet.getRespStr());
        List<ChatCompletionChoice> choices = result.getJSONArray("choices").toJavaList(ChatCompletionChoice.class);
        System.out.println(choices.get(0).getMessage().getContent());
        ChatMessage context = new ChatMessage(choices.get(0).getMessage().getRole(), choices.get(0).getMessage().getContent());
        System.out.println(context.getContent());
    }
}
- Scanner reads the input from the console. If the console refuses input while running unit tests, go into the IDEA installation directory and edit the configuration file there, appending -Deditable.java.test.console=true as the last line.
- Create a ChatMessage object to hold the message. The role can be user, system, or assistant; responses from the API normally carry the assistant role, and for our own messages user is generally all we need.
- Build the request parameters with ChatCompletionRequest; here we use gpt-3.5-turbo-0301, the latest model released on March 1. To see exactly which models are available, call the v1/models endpoint.
- Then call openAiApi.post to send the request and parse the result into a JSON object. The choices field is converted into ChatCompletionChoice objects, which carry the detailed information returned by the API.
The response comes back in the following format:
{
  "id": "chatcmpl-6rNPw1hqm5xMVMsyf6PXClRHtNQAI",
  "object": "chat.completion",
  "created": 1678179420,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 16,
    "completion_tokens": 339,
    "total_tokens": 355
  },
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "\n\nI. 介绍数字孪生的概念和背景\n A. 数字孪生的界说和意义\n B. 数字孪生的开展进程\n C. 数字孪生在现代工业的使用\n\nII. 数字孪生的构建办法\n A. 数字孪生的数据采集和处理\n B. 数字孪生的建模和仿真\n C. 数字孪生的验证和测验\n\nIII. 数字孪生的使用领域和案例剖析\n A. 制造业领域中的数字孪生使用\n B. 修建和城市领域中的数字孪生使用\n C. 医疗和健康领域中的数字孪生使用\n\nIV. 数字孪生的挑战和开展趋势\n A. 数字孪生的技能挑战\n B. 数字孪生的实践难点\n C. 数字孪生的未来开展趋势\n\nV. 结论和展望\n A. 总结数字孪生的意义和价值\n B. 展望数字孪生的未来开展趋势和研究方向"
    },
    "finish_reason": "stop",
    "index": 0
  }]
}
- Print the corresponding output.
2.3.9. Demo of the Result
2.4. Implementing Continuous Conversation
2.4.1. The conversation feature
Once the basic call worked, I noticed that after a single exchange, further input started a brand-new conversation rather than continuing the old one. So how do we link the context together? It only takes a small change: save the messages from every exchange into one message list, and the Q&A then carries context. The code is as follows:
List<ChatMessage> messages = new ArrayList<>();

@Test
public void createChatCompletion() {
    Scanner in = new Scanner(System.in);
    String input = in.next();
    while (!"exit".equals(input)) {
        // role "user" for our own messages; the reply comes back as "assistant"
        ChatMessage userMessage = new ChatMessage("user", input);
        messages.add(userMessage);
        ChatCompletionRequest chatCompletionRequest = ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo-0301")
                .messages(messages)
                .user("testing")
                .max_tokens(500)
                .temperature(1.0)
                .build();
        ExecuteRet executeRet = openAiApi.post(PathConstant.COMPLETIONS.CREATE_CHAT_COMPLETION,
                JSONObject.toJSONString(chatCompletionRequest), null);
        JSONObject result = JSONObject.parseObject(executeRet.getRespStr());
        List<ChatCompletionChoice> choices = result.getJSONArray("choices").toJavaList(ChatCompletionChoice.class);
        System.out.println(choices.get(0).getMessage().getContent());
        // append the assistant's reply so the next request carries the context
        ChatMessage context = new ChatMessage(choices.get(0).getMessage().getRole(), choices.get(0).getMessage().getContent());
        messages.add(context);
        input = in.next();
    }
}
The messages parameter of OpenAI's /v1/chat/completions endpoint is a list precisely so that it can carry the context, so all we need to do is accumulate each exchange's messages in that list.
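One caveat: since every turn is appended to the list, it grows without bound, and a long conversation will eventually exceed the model's context window (and run up token costs). Below is a minimal sketch of one mitigation, keeping only the most recent messages; the ContextTrimmer class and the cutoff value are hypothetical, and real token budgeting would need an actual tokenizer rather than a message count:

```java
import java.util.ArrayList;
import java.util.List;

public class ContextTrimmer {

    // Keep at most maxMessages of the most recent entries, dropping the oldest first.
    static <T> List<T> trim(List<T> messages, int maxMessages) {
        if (messages.size() <= maxMessages) {
            return new ArrayList<>(messages);
        }
        return new ArrayList<>(messages.subList(messages.size() - maxMessages, messages.size()));
    }

    public static void main(String[] args) {
        List<String> history = new ArrayList<>();
        for (int i = 1; i <= 12; i++) {
            history.add("message-" + i);
        }
        // With a cutoff of 10, the two oldest messages are dropped.
        List<String> trimmed = trim(history, 10);
        System.out.println(trimmed.size() + " " + trimmed.get(0));
    }
}
```

You would call trim(messages, …) before building each ChatCompletionRequest; a smarter variant could always preserve an initial system message.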
2.4.2. The result:
4. Common Problems
4.1. Calls to the OpenAI API fail
The address https://api.openai.com/ is blocked as well, but the API itself does not check the caller's region, so you can either set up your own Hong Kong proxy or use some other way of crossing the wall.
I went with the Hong Kong proxy, which settles it once and for all. The proxy setup goes as follows:
- Buy a Hong Kong VPS. You'll find plenty of uses for it anyway, and as a developer it's worth having one; during promotions new users get them cheap, around 200 RMB for three years.
- Visit nginx.org/download/ng… and download the latest nginx.
- Deploy nginx and edit the /nginx/conf/nginx.conf file, configuring the proxy path for the API as follows:
server {
    listen       19999 ssl;
    server_name  ai;

    ssl_certificate      /usr/local/nginx/ssl/server.crt;
    ssl_certificate_key  /usr/local/nginx/ssl/server.key;
    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;
    ssl_ciphers          HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location /v1/ {
        proxy_pass https://api.openai.com;
    }
}
- Start nginx.
- Change the API base address to the nginx machine's public IP plus the port.
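With the proxy in place, nothing in the Java code changes; only the configuration does. A sketch of the relevant application.properties entries, assuming the nginx setup above (the IP is a placeholder you would replace with your server's public address, and the port must match the listen directive):

```properties
# Route API calls through the Hong Kong nginx proxy instead of api.openai.com
open.ai.url=https://<your-proxy-ip>:19999
open.ai.token=sk-your-api-key
```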
If the proxy configuration is still unclear, leave a comment and I can do a separate tutorial on it.
4.2. The API returns 401
Check that the request adds the Authorization token header and that the key is correct.
5. Summary
That completes the Java integration with OpenAI, including support for continuous conversation. On this foundation you can keep refining it and wire it up to a web service to build your own customized ChatGPT assistant. I've built such a platform myself and am continuously improving it (see the image below); I'll open-source it later, and if you'd like to try it, message me for the address and an account.
This article is taking part in the "Jinshi Project" (金石计划).