
[Bug]: RuntimeError: Exception from [CPU] Loop node with name 'Loop.0' at src/plugins/intel_cpu/src/nodes/tensoriterator.cpp:182 #28235

Status: Open
fxwfzsxyq opened this issue Dec 31, 2024 · 11 comments
Labels: bug, category: CPU (OpenVINO CPU plugin), PSE, support_request

@fxwfzsxyq

OpenVINO Version

2024.6.0

Operating System

Ubuntu 18.04 (LTS)

Device used for inference

CPU

Framework

ONNX

Model used

slanet_plus

Issue description

I converted a PaddlePaddle model to ONNX and ran inference with OpenVINO, but inference fails with the error below.

Step-by-step reproduction

model_path: https://paddle-model-ecology.bj.bcebos.com/paddlex/official_inference_model/paddle3.0b2/SLANet_plus_infer.tar

Use Paddle2ONNX to convert the Paddle model to ONNX (a typical invocation is sketched below).
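
A typical Paddle2ONNX invocation looks like the following; the inference.pdmodel / inference.pdiparams filenames are assumptions based on the usual layout of a Paddle inference-model archive:

paddle2onnx --model_dir ./SLANet_plus_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ./weights/slanet-plus.onnx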

inference code:

import time

import numpy as np
import openvino as ov

# TablePreprocess, TableLabelDecode, slanet_plus_character and
# adapt_bboxes come from the surrounding project (not shown here).


class TableRecOpenvino:

    def __init__(self, model_path):
        super(TableRecOpenvino, self).__init__()

        # Loading the converted IR directly fails with the same error:
        # core = ov.Core()
        # model = core.read_model(model=model_path, weights=model_path.replace('.xml', '.bin'))
        # self.net = core.compile_model(model=model, device_name="CPU")

        ov_model = ov.convert_model("./weights/slanet-plus.onnx")
        self.net = ov.compile_model(ov_model, "AUTO")

        self.preprocess_op = TablePreprocess()
        self.postprocess_op = TableLabelDecode(slanet_plus_character)

    def async_infer(self, img):
        infer_request = self.net.create_infer_request()
        # Set the input data
        infer_request.set_input_tensor(ov.Tensor(img))
        # Start asynchronous inference
        infer_request.start_async()
        # Wait for inference to finish
        infer_request.wait()

        # Collect every output tensor
        output_tensors = [
            infer_request.get_output_tensor(i) for i in range(len(self.net.outputs))
        ]

        # Extract the output data and return it
        return [output_tensor.data for output_tensor in output_tensors]

    def rec_img(self, img):
        starttime = time.time()
        data = {"image": img}
        data = self.preprocess_op(data)
        img = data[0]
        if img is None:
            return None, 0
        img = np.expand_dims(img, axis=0)
        img_data = img.copy()

        # The RuntimeError below is raised during this call
        outputs = self.async_infer(img_data)

        preds = {"loc_preds": outputs[0], "structure_probs": outputs[1]}

        shape_list = np.expand_dims(data[-1], axis=0)
        post_result = self.postprocess_op(preds, [shape_list])

        bbox_list = post_result["bbox_batch_list"][0]

        structure_str_list = post_result["structure_batch_list"][0]
        structure_str_list = structure_str_list[0]
        structure_str_list = (
            ["<html>", "<body>", "<table>"]
            + structure_str_list
            + ["</table>", "</body>", "</html>"]
        )
        elapse = time.time() - starttime
        return structure_str_list, adapt_bboxes(bbox_list), elapse

error:

RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:245:
Exception from src/bindings/python/src/pyopenvino/core/infer_request.hpp:54:
Caught exception: Exception from src/plugins/intel_cpu/src/node.cpp:746:
[CPU] Loop node with name 'Loop.0' Check 'mem->getShape() == Shape(VectorDims{1})' failed at src/plugins/intel_cpu/src/nodes/tensoriterator.cpp:182

Relevant log output

No response

Issue submission checklist

  • I'm reporting an issue. It's not a question.
  • I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
  • There is reproducer code and related data files such as images, videos, models, etc.
fxwfzsxyq added the bug and support_request labels on Dec 31, 2024
@fxwfzsxyq
Author

I also tried converting the model to OpenVINO's XML and BIN files for loading. The model converts successfully, but it still throws the same error during inference.
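
For reference, a minimal sketch of that ONNX-to-IR conversion with the Python API (paths assumed from the reproducer above):

import openvino as ov

ov_model = ov.convert_model("./weights/slanet-plus.onnx")
# save_model writes slanet-plus.xml plus a slanet-plus.bin next to it
ov.save_model(ov_model, "./weights/slanet-plus.xml")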

@Iffa-Intel

@fxwfzsxyq Could you share:

  1. The steps you followed for the conversion
  2. The relevant model files (especially the XML and BIN files you mentioned)
  3. The official link to that Paddle model
  4. Any modifications you made to the model

@fxwfzsxyq
Author

fxwfzsxyq commented Jan 6, 2025

@Iffa-Intel I haven't modified the model. I simply used Paddle2ONNX to convert the Paddle model to ONNX, then used the official command to convert the ONNX model to OpenVINO IR. The ONNX model loads and runs fine, but the converted OpenVINO model, although it loads successfully, throws this error during inference. I find it strange that the model converts without complaint yet fails during the actual forward pass; it looks like a dimension mismatch.

@Iffa-Intel

Iffa-Intel commented Jan 6, 2025

@fxwfzsxyq From my side, it seems the model has a shape issue:

[screenshots showing the shape error]

Even when I tried to convert with a static shape:
[screenshot of the same error with a static shape]

Is there any specific reason you want to use the OpenVINO IR format (xml and bin) instead of ONNX, given that OpenVINO can read ONNX models directly?
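
For anyone following along, skipping the IR step entirely can be as simple as the sketch below (path assumed from the reproducer above):

import openvino as ov

core = ov.Core()
# compile_model accepts the ONNX file directly; no xml/bin needed
compiled = core.compile_model("./weights/slanet-plus.onnx", "CPU")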

@fxwfzsxyq
Author

@Iffa-Intel My point is that the model works within the ONNX ecosystem, using ONNX Runtime for inference. However, I would like to run it with OpenVINO on the CPU, since that is expected to be faster than ONNX Runtime there. The model's input shape should be (1, 3, 488, 488). I suspect the issue might be due to rounding in OpenVINO's RNN implementation, because 488 is not divisible by 16.
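
If the suspicion is shape-related, one experiment worth trying is pinning the input to the expected static shape before compiling. A sketch (untested against this model; it may simply surface the same Loop error):

import openvino as ov

ov_model = ov.convert_model("./weights/slanet-plus.onnx")
# Fix the dynamic input to the expected (1, 3, 488, 488) shape
ov_model.reshape([1, 3, 488, 488])
compiled = ov.compile_model(ov_model, "CPU")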

@Iffa-Intel

@fxwfzsxyq we'll further investigate this and get back to you with a possible workaround.

Iffa-Intel added the PSE label on Jan 7, 2025
@fxwfzsxyq
Author

@Iffa-Intel Sure, looking forward to your reply.

@wenjiew wenjiew changed the title [Bug]: RuntimeError: Exception from src/inference/src/cpp/infer_request.cpp:245: Exception from src/bindings/python/src/pyopenvino/core/infer_request.hpp:54: Caught exception: Exception from src/plugins/intel_cpu/src/node.cpp:746: [CPU] Loop node with name 'Loop.0' Check 'mem->getShape() == Shape(VectorDims{1})' failed at src/plugins/intel_cpu/src/nodes/tensoriterator.cpp:182 [Bug]: RuntimeError: Exception from [CPU] Loop node with name 'Loop.0' at src/plugins/intel_cpu/src/nodes/tensoriterator.cpp:182 Jan 8, 2025
@yuxu42
Contributor

yuxu42 commented Jan 8, 2025

I also tried converting the model to OpenVINO's XML and BIN files for loading. The model converts successfully, but it still throws the same error during inference.

Hi @fxwfzsxyq, have you tried using the Paddle model directly as input to OpenVINO for inference?
Doc: https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html#step-2-compile-the-model
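
Per the linked doc, that would look roughly like the sketch below; the .pdmodel filename inside the downloaded archive is an assumption:

import openvino as ov

core = ov.Core()
# OpenVINO's Paddle frontend reads the inference model directly
# (the matching .pdiparams file must sit next to it)
compiled = core.compile_model("./SLANet_plus_infer/inference.pdmodel", "CPU")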

@fxwfzsxyq
Author

@yuxu42 That still fails, with the same error @Iffa-Intel encountered:

[screenshot of the error]

@fxwfzsxyq
Author

I opened the ONNX model in Netron, and the error may occur at the following location:

[screenshot of the Loop node in Netron]

I'm not sure whether OpenVINO supports this construct. I've converted many models from ONNX to OpenVINO before, and this is the first time I've hit this issue.
