
[Docs]: Can the PyTorch JIT model be converted to an OpenVINO model in C++? #28290

Open
hujhcv opened this issue Jan 7, 2025 · 3 comments
hujhcv commented Jan 7, 2025

Documentation link

https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-pytorch.html

Description

In Python, the following code can be used to convert the PyTorch JIT model to an OpenVINO model.

import torchvision
import torch
import openvino as ov

model = torchvision.models.resnet50(weights='DEFAULT')
ov_model = ov.convert_model(model)

Our project does not want to use Python; is there a corresponding function in the C++ version of OpenVINO that can convert the PyTorch JIT model to an OpenVINO model?

Issue submission checklist

  • I'm reporting a documentation issue. It's not a question.
mvafin (Contributor) commented Jan 7, 2025

@hujhcv Unfortunately, there is no such function; we rely on torch to read the model graph in Python.
Could you tell us more about why you need this?
Do you get the JIT model in C++ as well? If not, why can't you convert the model with OpenVINO at the same time?

hujhcv (Author) commented Jan 7, 2025

Thank you for your reply. We want to develop software that lets users perform fine-tuning on their own. Python requires setting up a complex environment and runs slower than C++, so we prefer not to use it. We would like to implement model fine-tuning with LibTorch in C++, and we hope to export the trained model to OpenVINO IR directly in C++, so that we can avoid Python entirely.

slyalin (Contributor) commented Jan 8, 2025

@hujhcv, the current import flow for PyTorch in OpenVINO relies on Python, as @mvafin mentioned, but the core of the PyTorch frontend is written in C++. The translation code that maps PyTorch op semantics to OpenVINO op semantics is implemented in C++, and the entry dialect on the PyTorch side is TorchScript. So the part that still relies on Python could be re-implemented in pure C++, which would enable your scenario.

Here is the base class that provides the API for that part: https://github.com/openvinotoolkit/openvino/blob/master/src/frontends/pytorch/include/openvino/frontend/pytorch/decoder.hpp; it is called TorchDecoder. The Python implementation of that API, which would need to be ported to C++, is here: https://github.com/openvinotoolkit/openvino/blob/master/src/bindings/python/src/openvino/frontend/pytorch/ts_decoder.py.

As you can see, those parts contain quite a lot of code, and, specifically for PyTorch, we have not maintained a C++ decoder implementation for a long time: there was a prototype at an early stage of development, but the decoder has since evolved purely in Python, so I would expect surprises when porting it to C++. As we don't see strong demand from the community for that flow, it is unlikely that the core team will invest in this implementation. But we can help you by providing consultations if you are ready to add this flow as an external contribution.
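To give a concrete sense of what the decoder wraps, here is a small Python sketch using plain TorchScript introspection (not OpenVINO API): it traces a module and walks the resulting graph nodes, which is essentially the information a TorchDecoder implementation has to expose, whether from Python or from C++ via torch::jit:

```python
import torch

class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# Trace the module into TorchScript; the decoder operates on this graph.
traced = torch.jit.trace(TinyNet(), torch.rand(1, 4))

# Each graph node carries an op kind (e.g. "aten::relu") plus typed
# inputs and outputs; the decoder hands these to the OpenVINO frontend,
# which translates each op kind to OpenVINO op semantics.
kinds = [node.kind() for node in traced.graph.nodes()]
print(kinds)
```

A C++ port would obtain the same graph through the LibTorch torch::jit API instead of Python bindings.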
