2023-12-19
Paddle

Contents

ai-paddle deployment
paddle serving
Attempt 1
Attempt 2
Attempt 3
References

ai-paddle deployment

Keywords: [[2023 id=643c3078-da79-43c8-b90e-547053ed8ea1]] [[实践类文档 id=44b548e9-964e-4d5b-b1e9-a94b5c19c1a5]] [[轻度记忆 id=0c4a4ed4-5c2b-40e9-bb93-f666be8b4ba1]]

Paddle offers two server deployment options: PaddleHub and Paddle Serving.

PaddleHub only supports image classification; it does not support object detection.

Source:

paddle serving

Attempt 1

Followed the PaddleClas serving deployment guide; deployment with Docker on macOS failed. Link below. Root cause: [[noavx id=9d7e554d-1070-4254-8349-aa1ecb42e197]]

https://gitee.com/paddlepaddle/PaddleClas/tree/release/2.5/deploy/paddleserving

TODO: not yet tried on Linux.

Attempt 2

Followed the Paddle Serving install guide and the PaddleClas serving deployment guide; deployed successfully on a Huawei Cloud Linux server. Links below:

https://gitee.com/paddlepaddle/PaddleClas/tree/release/2.5/deploy/paddleserving

https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md

  1. Install Docker; see [[docker-安装 id=e84d38fb-02d8-4bf8-a727-c83aa4bf1d98]]
  2. Pull and run the image. Note port 9292: it is mapped to the web port and will be used later.
```bash
docker pull paddlepaddle/serving:0.7.0-devel
docker run -p 9292:9292 --name test -dit paddlepaddle/serving:0.7.0-devel bash
# enter the container
docker exec -it test /bin/bash
# verify with: docker ps
```
  3. Inside the container, install the dependencies. You may need a domestic mirror (e.g. Tsinghua: add -i https://pypi.tuna.tsinghua.edu.cn/simple to the pip commands) to speed up the downloads.
```bash
python3.7 -m pip install paddle-serving-client==0.7.0
python3.7 -m pip install paddle-serving-app==0.7.0
python3.7 -m pip install faiss-cpu==1.7.1post2
# for a CPU deployment environment:
python3.7 -m pip install paddle-serving-server==0.7.0  # CPU
python3.7 -m pip install paddlepaddle==2.2.0  # CPU
```
  4. Model conversion
  • Download and extract the ResNet50_vd inference model:
```bash
# download the ResNet50_vd inference model
wget -nc https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar
# extract it
tar xf ResNet50_vd_infer.tar
```
  • Use the paddle_serving_client command to convert the downloaded inference model into the format Server deploys easily:
```bash
# convert the ResNet50_vd model
python3.7 -m paddle_serving_client.convert \
    --dirname ./ResNet50_vd_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server ./ResNet50_vd_serving/ \
    --serving_client ./ResNet50_vd_client/
```

The resulting directory structure:

```
├── ResNet50_vd_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
└── ResNet50_vd_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
```

To accommodate different models, Serving supports renaming inputs and outputs: deploying a different model only requires changing alias_name in the configuration files, with no code changes. So after conversion, edit serving_server_conf.prototxt under ResNet50_vd_serving and serving_client_conf.prototxt under ResNet50_vd_client, changing the field after alias_name: in fetch_var to prediction. The modified serving_server_conf.prototxt and serving_client_conf.prototxt look like this:

```
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "prediction"
  is_lod_tensor: false
  fetch_type: 1
  shape: 1000
}
```
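Editing the two prototxt files by hand works, but the rename can also be scripted. A stdlib-only sketch (the `set_fetch_alias` helper is my own, not part of Serving):

```python
import re


def set_fetch_alias(conf_text: str, alias: str = "prediction") -> str:
    """Rewrite alias_name inside every fetch_var block, leaving feed_var alone."""
    def fix(match: "re.Match") -> str:
        return re.sub(r'alias_name:\s*"[^"]*"',
                      'alias_name: "%s"' % alias, match.group(0))
    # fetch_var blocks are flat (no nested braces), so [^}]* is safe here
    return re.sub(r'fetch_var\s*\{[^}]*\}', fix, conf_text, flags=re.S)


if __name__ == "__main__":
    for path in ("ResNet50_vd_serving/serving_server_conf.prototxt",
                 "ResNet50_vd_client/serving_client_conf.prototxt"):
        with open(path) as f:
            text = f.read()
        with open(path, "w") as f:
            f.write(set_fetch_alias(text))
```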
  5. Deployment

Get the paddleserving directory, located at deploy/paddleserving in the PaddleClas repository, preferably by cloning the matching release branch:

```bash
git clone https://gitee.com/paddlepaddle/PaddleClas.git -b release/2.5
```

The paddleserving directory contains the code for starting the pipeline service, the C++ serving service, and sending prediction requests, mainly:

```
__init__.py
classification_web_service.py  # script that starts the pipeline server
config.yml                     # configuration for the pipeline service
pipeline_http_client.py        # sends pipeline prediction requests over HTTP
pipeline_rpc_client.py         # sends pipeline prediction requests over RPC
paddle2onnx.md                 # classification model serving deployment docs
run_cpp_serving.sh             # starts the C++ Serving deployment
test_cpp_serving_client.py     # sends C++ serving prediction requests over RPC
```

Edit config.yml: change the port, fetch_list, and the model path (model_config).

```yaml
# worker_num: maximum concurrency. When build_dag_each_worker=True the framework
# creates worker_num processes, each building its own gRPC server and DAG;
# when build_dag_each_worker=False it sets max_workers=worker_num on the main
# thread's gRPC thread pool.
worker_num: 1

# HTTP port. rpc_port and http_port must not both be empty. When rpc_port is
# available and http_port is empty, no http_port is generated automatically.
http_port: 9292
#rpc_port: 9993

dag:
    # op resource type: True = thread model, False = process model
    is_thread_op: False

op:
    imagenet:
        # concurrency: threads if is_thread_op=True, otherwise processes
        concurrency: 1

        # when the op config has no server_endpoints, the local service
        # config is read from local_service_conf
        local_service_conf:
            # model path
            model_config: ../ResNet50_vd_serving

            # hardware type: empty = decided by devices (CPU/GPU),
            # 0 = cpu, 1 = gpu, 2 = tensorRT, 3 = arm cpu, 4 = kunlun xpu
            device_type: 1

            # device IDs: "" or unset = CPU prediction; "0" or "0,1,2" = GPU
            # prediction on the listed cards
            devices: "0" # "0,1"

            # client type: brpc, grpc or local_predictor
            # (local_predictor predicts in-process without a Serving service)
            client_type: local_predictor

            # fetch list: must use the alias_name of fetch_var in client_config
            fetch_list: ["prediction"]
```
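One thing worth checking before starting the service: fetch_list in config.yml must match the alias_name set in the prototxt files earlier. A minimal stdlib sketch to pull fetch_list out of the file without a PyYAML dependency (the `extract_fetch_list` helper is hypothetical, my own):

```python
import re


def extract_fetch_list(config_text: str) -> list:
    """Pull the fetch_list entries out of a config.yml snippet."""
    m = re.search(r'fetch_list:\s*\[([^\]]*)\]', config_text)
    if not m:
        return []
    # split on commas and drop surrounding quotes
    return [item.strip().strip('"\'')
            for item in m.group(1).split(',') if item.strip()]
```

If the list it returns does not match the alias_name you set (e.g. prediction), the server will start but every request will fail at fetch time.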

Start the service:

```bash
# start the service; the run log is written to log.txt
python3.7 classification_web_service.py &>log.txt &
```

Send a request: edit pipeline_http_client.py and change the port to 9292, then:

```bash
# send a prediction request
python3.7 pipeline_http_client.py
```

On success, the model's prediction is printed on the client side:

```
{'err_no': 0, 'err_msg': '', 'key': ['label', 'prob'], 'value': ["['daisy']", '[0.9341402053833008]'], 'tensors': []}
```
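pipeline_http_client.py can also be replaced by a few lines of client code. A sketch, assuming the op name imagenet from config.yml and the port 9292 mapped above (the `build_request` helper and the daisy.jpg filename are my own, for illustration):

```python
import base64
import json


def build_request(image_bytes: bytes) -> dict:
    """Wrap raw image bytes in the pipeline's expected key/value payload."""
    return {"key": ["image"],
            "value": [base64.b64encode(image_bytes).decode("utf8")]}


if __name__ == "__main__":
    import requests  # third-party

    with open("daisy.jpg", "rb") as f:
        payload = build_request(f.read())
    # URL pattern: http://<host>:<http_port>/<op_name>/prediction
    r = requests.post("http://127.0.0.1:9292/imagenet/prediction",
                      data=json.dumps(payload))
    print(r.json())
```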
  • Stopping the service: if the server runs in the foreground, press Ctrl+C; if it runs in the background, kill the relevant processes, or run the following from the directory where the service was started:
```bash
python3.7 -m paddle_serving_server.serve stop
```

A Process stopped message indicates the service was shut down successfully.

  6. Requests: making a request via Postman succeeded.

My own script to convert an image to base64:

```python
import sys
import json
import base64


def cv2_to_base64(image_path):
    with open(image_path, 'rb') as file:
        image_data = file.read()
    return base64.b64encode(image_data).decode('utf8')


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Please pass an image path as argument")
        sys.exit(1)
    image_path = sys.argv[1]
    image_base64 = cv2_to_base64(image_path)
    data = {"key": ["image"], "value": [image_base64]}
    print(json.dumps(data))
```

You can also convert with an online tool such as https://www.base64-image.de/, but the data-URL prefix identifying the file type (data:image/jpeg;base64,) must be stripped from its output.
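Stripping that prefix is one line; a tiny helper (my own, not part of any tool) that works whether or not the prefix is present:

```python
def strip_data_url_prefix(s: str) -> str:
    """Remove a leading data:image/...;base64, prefix if present."""
    if s.startswith("data:") and "base64," in s:
        return s.split("base64,", 1)[1]
    return s
```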

Attempt 3

Smart-retail product recognition based on image recognition:

Deployed the application successfully on AI Studio: https://aistudio.baidu.com/projectdetail/3460304

Deployment on Huawei Cloud CentOS 8.2 failed; requests took too long. Reference: https://aistudio.baidu.com/projectdetail/3460304

Docker deployment on Huawei Cloud CentOS 8.2 succeeded; see https://gitee.com/paddlepaddle/PaddleClas/blob/release/2.5/docs/zh_CN/deployment/PP-ShiTu/paddle_serving.md

Key scripts:

```bash
python3.7 -m pip install paddle-serving-client==0.7.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
python3.7 -m pip install paddle-serving-app==0.7.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
python3.7 -m pip install faiss-cpu==1.7.1post2 -i https://pypi.tuna.tsinghua.edu.cn/simple
python3.7 -m pip install paddle-serving-server==0.7.0 -i https://pypi.tuna.tsinghua.edu.cn/simple  # CPU
python3.7 -m pip install paddlepaddle==2.2.0 -i https://pypi.tuna.tsinghua.edu.cn/simple  # CPU
```
```bash
#!/bin/bash
# 1. download PaddleClas
mkdir -p /home/aistudio && cd /home/aistudio
git clone https://gitee.com/paddlepaddle/PaddleClas.git -b release/2.5
cd PaddleClas/deploy

# 2. download the models
# create and enter the models directory
mkdir models
cd models
# download and extract the general recognition model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/PP-ShiTuV2/general_PPLCNetV2_base_pretrained_v1.0_infer.tar
tar -xf general_PPLCNetV2_base_pretrained_v1.0_infer.tar
# download and extract the mainbody detection model
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
```
```bash
cd /home/aistudio/PaddleClas/deploy
# convert the general recognition model
python3.7 -m paddle_serving_client.convert \
    --dirname /home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server /home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_serving/ \
    --serving_client /home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_client/
# convert the mainbody detection model
python3.7 -m paddle_serving_client.convert \
    --dirname /home/aistudio/PaddleClas/deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server /home/aistudio/PaddleClas/deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
    --serving_client /home/aistudio/PaddleClas/deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/

# download the prebuilt retrieval index
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v2.0.tar
# extract it
tar -xf drink_dataset_v2.0.tar

echo "Remember to change alias_name in the following files:"
echo "/home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_serving/serving_server_conf.prototxt"
echo "/home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_client/serving_client_conf.prototxt"
```

Dependencies on AI Studio:

```bash
!python3 -m pip install paddle-serving-client==0.7.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
!python3 -m pip install paddle-serving-app==0.7.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
!python3 -m pip install faiss-cpu==1.7.1post2 -i https://pypi.tuna.tsinghua.edu.cn/simple
!python3 -m pip install paddle-serving-server==0.7.0 -i https://pypi.tuna.tsinghua.edu.cn/simple  # CPU

# 1. download PaddleClas
!mkdir -p /home/aistudio && cd /home/aistudio
!git clone https://gitee.com/paddlepaddle/PaddleClas.git -b release/2.5
%cd PaddleClas/deploy

# 2. download the models
# create and enter the models directory
!mkdir models
%cd models
# download and extract the general recognition model
!wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/PP-ShiTuV2/general_PPLCNetV2_base_pretrained_v1.0_infer.tar
!tar -xf general_PPLCNetV2_base_pretrained_v1.0_infer.tar
# download and extract the mainbody detection model
!wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/models/inference/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar
!tar -xf picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer.tar

%cd /home/aistudio/PaddleClas/deploy
# convert the general recognition model
!python3 -m paddle_serving_client.convert \
    --dirname /home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server /home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_serving/ \
    --serving_client /home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_client/
# convert the mainbody detection model
!python3.7 -m paddle_serving_client.convert \
    --dirname /home/aistudio/PaddleClas/deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --serving_server /home/aistudio/PaddleClas/deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
    --serving_client /home/aistudio/PaddleClas/deploy/models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/

# download the prebuilt retrieval index
!wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/rec/data/drink_dataset_v2.0.tar
# extract it
!tar -xf drink_dataset_v2.0.tar

!echo "Remember to change alias_name to features in the following files:"
!echo "/home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_serving/serving_server_conf.prototxt"
!echo "/home/aistudio/PaddleClas/deploy/models/general_PPLCNetV2_base_pretrained_v1.0_client/serving_client_conf.prototxt"
```

web_recognition_service.py, with logging added and the cropped images returned in the response:

```python
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import json
import logging
import os
import pickle
#import sys

import cv2
import faiss
import numpy as np
from paddle_serving_app.reader import BGR2RGB
from paddle_serving_app.reader import Div
from paddle_serving_app.reader import Normalize
from paddle_serving_app.reader import RCNNPostprocess
from paddle_serving_app.reader import Resize
from paddle_serving_app.reader import Sequential
from paddle_serving_app.reader import Transpose
from paddle_serving_server.web_service import Op, WebService

logger = logging.getLogger(__name__)
logger.setLevel(level=logging.DEBUG)
# FileHandler
file_handler = logging.FileHandler('output.log')
file_handler.setLevel(level=logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(message)s')
formatter.datefmt = '%Y-%m-%d %H:%M:%S'
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)


class DetOp(Op):
    def init_op(self):
        self.img_preprocess = Sequential([
            BGR2RGB(), Div(255.0),
            Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
            Resize((640, 640)), Transpose((2, 0, 1))
        ])
        self.img_postprocess = RCNNPostprocess("label_list.txt", "output")
        self.threshold = 0.2
        self.max_det_results = 5

    def generate_scale(self, im):
        """
        Args:
            im (np.ndarray): image (np.ndarray)
        Returns:
            im_scale_x: the resize ratio of X
            im_scale_y: the resize ratio of Y
        """
        target_size = [640, 640]
        origin_shape = im.shape[:2]
        resize_h, resize_w = target_size
        im_scale_y = resize_h / float(origin_shape[0])
        im_scale_x = resize_w / float(origin_shape[1])
        return im_scale_y, im_scale_x

    def preprocess(self, input_dicts, data_id, log_id):
        logger.info(f"{log_id} | det | preprocess | input_dicts: {str(input_dicts)[:100]}")
        (_, input_dict), = input_dicts.items()
        imgs = []
        raw_imgs = []
        for key in input_dict.keys():
            data = base64.b64decode(input_dict[key].encode('utf8'))
            raw_imgs.append(data)
            data = np.frombuffer(data, np.uint8)  # np.fromstring is deprecated
            raw_im = cv2.imdecode(data, cv2.IMREAD_COLOR)

            im_scale_y, im_scale_x = self.generate_scale(raw_im)
            im = self.img_preprocess(raw_im)

            im_shape = np.array(im.shape[1:]).reshape(-1)
            scale_factor = np.array([im_scale_y, im_scale_x]).reshape(-1)
            imgs.append({
                "image": im[np.newaxis, :],
                "im_shape": im_shape[np.newaxis, :],
                "scale_factor": scale_factor[np.newaxis, :],
            })
        self.raw_img = raw_imgs

        feed_dict = {
            "image": np.concatenate([x["image"] for x in imgs], axis=0),
            "im_shape": np.concatenate([x["im_shape"] for x in imgs], axis=0),
            "scale_factor": np.concatenate([x["scale_factor"] for x in imgs], axis=0)
        }
        return feed_dict, False, None, ""

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        boxes = self.img_postprocess(fetch_dict, visualize=False)
        boxes.sort(key=lambda x: x["score"], reverse=True)
        boxes = filter(lambda x: x["score"] >= self.threshold,
                       boxes[:self.max_det_results])
        boxes = list(boxes)
        for i in range(len(boxes)):
            boxes[i]["bbox"][2] += boxes[i]["bbox"][0] - 1
            boxes[i]["bbox"][3] += boxes[i]["bbox"][1] - 1
        result = json.dumps(boxes)
        res_dict = {"bbox_result": result, "image": self.raw_img}
        logger.info(f"{log_id} | det | postprocess | res_dict: {str(res_dict)[:100]}")
        return res_dict, None, ""


class RecOp(Op):
    def init_op(self):
        self.seq = Sequential([
            BGR2RGB(), Resize((224, 224)), Div(255),
            Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
            Transpose((2, 0, 1))
        ])

        index_dir = "../../drink_dataset_v2.0/index"
        assert os.path.exists(os.path.join(
            index_dir, "vector.index")), "vector.index not found ..."
        assert os.path.exists(os.path.join(
            index_dir, "id_map.pkl")), "id_map.pkl not found ... "

        self.searcher = faiss.read_index(
            os.path.join(index_dir, "vector.index"))

        with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
            self.id_map = pickle.load(fd)

        self.rec_nms_thresold = 0.05
        self.rec_score_thres = 0.3
        self.feature_normalize = True
        self.return_k = 1

    def preprocess(self, input_dicts, data_id, log_id):
        (_, input_dict), = input_dicts.items()
        raw_img = input_dict["image"][0]
        data = np.frombuffer(raw_img, np.uint8)
        origin_img = cv2.imdecode(data, cv2.IMREAD_COLOR)

        dt_boxes = input_dict["bbox_result"]
        boxes = json.loads(dt_boxes)
        boxes.append({
            "category_id": 0,
            "score": 1.0,
            "bbox": [0, 0, origin_img.shape[1], origin_img.shape[0]]
        })
        self.det_boxes = boxes

        cut_img_base64 = []
        # construct batch images for rec
        imgs = []
        for box in boxes:
            box = [int(x) for x in box["bbox"]]
            im = origin_img[box[1]:box[3], box[0]:box[2]].copy()
            #cv2.imwrite(str(aa) + '.jpg', im)
            # encode the crop to JPEG bytes (choose an appropriate format)
            _, img_encoded = cv2.imencode('.jpg', im)
            # base64-encode the bytes
            base64_str = base64.b64encode(img_encoded).decode('utf-8')
            cut_img_base64.append(base64_str)
            img = self.seq(im)
            imgs.append(img[np.newaxis, :].copy())
        self.det_cut_imgs = cut_img_base64

        input_imgs = np.concatenate(imgs, axis=0)
        logger.info(f"{log_id} | rec | preprocess | input_imgs: {str(input_imgs)[:100]}")
        return {"x": input_imgs}, False, None, ""

    def nms_to_rec_results(self, results, thresh=0.1):
        filtered_results = []

        x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
        y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
        x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
        y2 = np.array([r["bbox"][3] for r in results]).astype("float32")
        scores = np.array([r["rec_scores"] for r in results])

        areas = (x2 - x1 + 1) * (y2 - y1 + 1)
        order = scores.argsort()[::-1]
        while order.size > 0:
            i = order[0]
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])

            w = np.maximum(0.0, xx2 - xx1 + 1)
            h = np.maximum(0.0, yy2 - yy1 + 1)
            inter = w * h
            ovr = inter / (areas[i] + areas[order[1:]] - inter)
            inds = np.where(ovr <= thresh)[0]
            order = order[inds + 1]
            filtered_results.append(results[i])
        return filtered_results

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        batch_features = fetch_dict["features"]
        logger.info(f"{log_id} | rec | postprocess | batch_features: {str(batch_features)[:100]}")

        if self.feature_normalize:
            feas_norm = np.sqrt(
                np.sum(np.square(batch_features), axis=1, keepdims=True))
            batch_features = np.divide(batch_features, feas_norm)

        scores, docs = self.searcher.search(batch_features, self.return_k)
        logger.info(f"{log_id} | rec | postprocess | scores: {scores}")
        logger.info(f"{log_id} | rec | postprocess | docs: {docs}")

        results = []
        for i in range(scores.shape[0]):
            pred = {}
            if scores[i][0] >= self.rec_score_thres:
                pred["bbox"] = [int(x) for x in self.det_boxes[i]["bbox"]]
                pred["rec_docs"] = self.id_map[docs[i][0]].split()[1]
                pred["rec_scores"] = scores[i][0]
                pred["index"] = str(i)
                pred["format"] = "jpeg"
                pred["image"] = "data:image/jpeg;base64," + self.det_cut_imgs[i]
                results.append(pred)

        # do NMS
        results = self.nms_to_rec_results(results, self.rec_nms_thresold)
        logger.info(f"{log_id} | rec | postprocess | results: {results}")
        return {"result": str(results)}, None, ""


class RecognitionService(WebService):
    def get_pipeline_response(self, read_op):
        det_op = DetOp(name="det", input_ops=[read_op])
        rec_op = RecOp(name="rec", input_ops=[det_op])
        return rec_op


product_recog_service = RecognitionService(name="recognition")
product_recog_service.prepare_pipeline_config("config.yml")
product_recog_service.run_service()
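Note that the rec op returns {"result": str(results)}, a Python-literal string rather than JSON, so json.loads on the client side fails. A client-side sketch using ast.literal_eval (assuming the scores serialize as plain numeric literals; numpy scalar reprs may need converting to float on the server first):

```python
import ast


def parse_result(result_str: str) -> list:
    """Recover the list of predictions from the server's str(results) payload."""
    return ast.literal_eval(result_str)
```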

References

| name | url | note |
| --- | --- | --- |
| Image list only | https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Docker_Images_CN.md | list of all paddle serving images |
| Deployment doc 1 | https://gitee.com/paddlepaddle/PaddleClas/tree/release/2.5/deploy/paddleserving | |
| Deployment doc 2 | https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Install_CN.md | |
| Training and deploying a vehicle-type classification model with PaddleClas and Paddle Serving on a Huawei Cloud GPU server | https://blog.csdn.net/loutengyuan/article/details/126674945 | not tried |

Author: 问海

Link:

Copyright: unless otherwise stated, all posts on this blog are licensed under BY-NC-SA. Please credit the source when reposting!