diff --git a/README.md b/README.md
index ddd8431..1471e35 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ An extensive anira usage guide can be found [here](docs/anira-usage.md). The basic usage of anira is as follows:
 
 ```cpp
-#include 
+#include 
 
 // Create a model configuration struct for your neural network
 anira::InferenceConfig myNNConfig(
diff --git a/docs/anira-usage.md b/docs/anira-usage.md
index 5e5698a..27379ec 100644
--- a/docs/anira-usage.md
+++ b/docs/anira-usage.md
@@ -16,6 +16,8 @@ To use anira in your real-time audio application, you need to create instances f
 Start by specifying your model configuration using ``anira::InferenceConfig``. This includes the model path, input/output forms, batch size, and other critical settings that match the requirements of your model. When using a single backend, you define the model path and input/output shapes only once.
 
 ```cpp
+#include 
+
 anira::InferenceConfig hybridNNConfig(
     // Model path and shapes for different backends
 #ifdef USE_LIBTORCH
@@ -66,7 +68,7 @@ If your model requires costum pre- or post-processing, you can inherit from the
 When your pre- and post-processing requires to access values from the ```anira::InferenceConfig``` struct, you can store the config as a member in your custom pre- and post-processor class. Here is an example of a custom pre- and post-processor. The config myConfig is provided in the "MyConfig.h" file.
 
 ```cpp
-#include 
+#include 
 #include "MyConfig.h"
 
 class MyPrePostProcessor : public anira::PrePostProcessor {
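
For context, the last hunk touches the custom pre-/post-processor example, which the usage guide describes as deriving from `anira::PrePostProcessor` and storing the `anira::InferenceConfig` from "MyConfig.h" as a member so that pre- and post-processing code can read values from it. The following is a minimal sketch of that pattern, not part of the diff: the umbrella header name `<anira/anira.h>` and the member name `m_config` are assumptions for illustration, and the virtual hooks to override are defined by `anira::PrePostProcessor` itself and are not shown here.

```cpp
// Sketch only: header and member names are illustrative assumptions.
#include <anira/anira.h>   // assumed umbrella header exposing the anira API
#include "MyConfig.h"      // provides the anira::InferenceConfig instance myConfig

class MyPrePostProcessor : public anira::PrePostProcessor {
public:
    // Keep a copy of the inference configuration so custom pre- and
    // post-processing code can read shapes, batch size, etc. from it.
    anira::InferenceConfig m_config = myConfig;

    // The pre-/post-processing virtual methods of anira::PrePostProcessor
    // would be overridden here; their exact signatures come from the
    // library headers and are not part of this diff.
};
```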