diff --git a/autograd.html b/autograd.html index e313abf5..045f711a 100644 --- a/autograd.html +++ b/autograd.html @@ -36,7 +36,7 @@
vtl
- 0.2.0 3db7b44 + 0.2.0 3f725fe
- +
  • fn add_gate
    diff --git a/datasets.html b/datasets.html index 11c86787..b3cda13e 100644 --- a/datasets.html +++ b/datasets.html @@ -36,7 +36,7 @@
    vtl
    - 0.2.0 3db7b44 + 0.2.0 3f725fe

    datasets #

    -

    VTL Datasets

    This module exposes some functionalities to download and use the VTL datasets. For this we created some batch based iterators to load the datasets.We expose the following datasets:

    • Mnist: A dataset of handwritten digits. - Imdb: A dataset of IMDB reviews for sentiment analysis.
    +

    VTL Datasets

This module exposes functionality to download and use the VTL datasets. For this we provide batch-based iterators to load the datasets. We expose the following datasets:

    • Mnist: A dataset of handwritten digits.
    • Imdb: A dataset of IMDB reviews for sentiment analysis.
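As a hedged sketch of how these batch-based iterators might be used (the load_mnist entry point, the .train label, and the batch_size parameter are hypothetical names inferred from this description, not verified against the module):

```v
import vtl.datasets

fn main() {
	// hypothetical entry point; check the datasets module docs for the real name
	mut ds := datasets.load_mnist(.train, batch_size: 32) or { panic(err) }
	for {
		// each iteration yields one batch until the dataset is exhausted
		batch := ds.next() or { break }
		println(batch.features.shape)
	}
}
```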
    @@ -121,56 +121,56 @@

    VTL Datasets

    This module exposes some functionalities to download and

    -const imdb_folder_name = 'aclImdb'
    +const mnist_test_labels_file = 't10k-labels-idx1-ubyte.gz'
    -const imdb_file_name ='${imdb_folder_name}_v1.tar.gz'
    +const mnist_test_images_file = 't10k-images-idx3-ubyte.gz'
    -const imdb_base_url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
    +const mnist_train_labels_file = 'train-labels-idx1-ubyte.gz'
    -const mnist_base_url = 'http://yann.lecun.com/exdb/mnist/'
    +const mnist_train_images_file = 'train-images-idx3-ubyte.gz'
    -const mnist_train_images_file = 'train-images-idx3-ubyte.gz'
    +const mnist_base_url = 'http://yann.lecun.com/exdb/mnist/'
    -const mnist_train_labels_file = 'train-labels-idx1-ubyte.gz'
    +const imdb_base_url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
    -const mnist_test_images_file = 't10k-images-idx3-ubyte.gz'
    +const imdb_file_name ='${imdb_folder_name}_v1.tar.gz'
    -const mnist_test_labels_file = 't10k-labels-idx1-ubyte.gz'
    +const imdb_folder_name = 'aclImdb'
    @@ -220,7 +220,7 @@

    VTL Datasets

    This module exposes some functionalities to download and -

    +
    • README
    • Constants
    • diff --git a/index.html b/index.html index e794c764..216f8de4 100644 --- a/index.html +++ b/index.html @@ -36,7 +36,7 @@
      vtl
      - 0.2.0 3db7b44 + 0.2.0 3f725fe
      -
      -

      README

      +
      +

      README #

      The V Tensor Library

      vlang.io | Docs | Tutorials | Changelog | Contributing

      [![Mentioned in Awesome V][awesomevbadge]][awesomevurl] [![Continuous Integration][workflowbadge]][workflowurl] [![Deploy Documentation][deploydocsbadge]][deploydocsurl] [![License: MIT][licensebadge]][licenseurl]

      import vtl
       t := vtl.from_array([1.0, 2, 3, 4], [2, 2])!
       t.get([1, 1])
      -// 4.0

      VTL Provides

      • An n-dimensional Tensor data structure
      • Sophisticated reduction, elementwise, and accumulation operations
      • Data Structures that can easily be passed to C libraries
      • Powerful linear algebra routines backed by VSL.
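The README snippet above can be expanded into a complete program; this sketch uses only the calls shown there (from_array and get):

```v
import vtl

fn main() {
	// 2x2 tensor built from a flat array, as in the snippet above
	t := vtl.from_array([1.0, 2, 3, 4], [2, 2])!
	// get takes one index per dimension
	println(t.get([0, 0])) // 1.0
	println(t.get([1, 1])) // 4.0, as in the README example
}
```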

      In the docs you can find more information about this module

      Installation

      Install dependencies (optional)

We use VSL as the backend for some functionalities. VTL requires VSL's linear algebra module. If you wish to use vtl without these, the vtl module will still function as normal.

Follow the install instructions in the VSL docs to install VSL with all needed dependencies.

      Install VTL

      v install vtl
      -

      Done. Installation completed.

      Testing

      To test the module, just type the following command:

      v test .
      -

      License

      MIT

      Contributors

This work was originally based on the work done by Christopher (christopherzimmerman).

The development of this library continues its course after we reimplemented its core and a large part of its interface. At the same time, we want to keep recognizing the work and inspiration that Christopher's library has given.

      Made with contributors-img.

      [awesomevbadge]: https://awesome.re/mentioned-badge.svg [workflowbadge]: https://github.com/vlang/vtl/actions/workflows/ci.yml/badge.svg [deploydocsbadge]: https://github.com/vlang/vtl/actions/workflows/deploy-docs.yml/badge.svg [licensebadge]: https://img.shields.io/badge/License-MIT-blue.svg [awesomevurl]: https://github.com/vlang/awesome-v/blob/master/README.md#scientific-computing [workflowurl]: https://github.com/vlang/vtl/actions/workflows/ci.yml [deploydocsurl]: https://github.com/vlang/vtl/actions/workflows/deploy-docs.yml [licenseurl]: https://github.com/vlang/vtl/blob/main/LICENSE

      +// 4.0

      VTL Provides

      • An n-dimensional Tensor data structure
      • Sophisticated reduction, elementwise, and accumulation operations
      • Data Structures that can easily be passed to C libraries
      • Powerful linear algebra routines backed by VSL.

      In the docs you can find more information about this module

      Installation

      Install dependencies (optional)

We use VSL as the backend for some functionalities. VTL requires VSL's linear algebra module. If you wish to use vtl without these, the vtl module will still function as normal.

Follow the install instructions in the VSL docs to install VSL with all needed dependencies.

      Install VTL

      v install vtl

      Done. Installation completed.

      Testing

      To test the module, just type the following command:

      v test .

      License

      MIT

      Contributors

This work was originally based on the work done by Christopher (christopherzimmerman).

The development of this library continues its course after we reimplemented its core and a large part of its interface. At the same time, we want to keep recognizing the work and inspiration that Christopher's library has given.

      Made with contributors-img.

      [awesomevbadge]: https://awesome.re/mentioned-badge.svg [workflowbadge]: https://github.com/vlang/vtl/actions/workflows/ci.yml/badge.svg [deploydocsbadge]: https://github.com/vlang/vtl/actions/workflows/deploy-docs.yml/badge.svg [licensebadge]: https://img.shields.io/badge/License-MIT-blue.svg [awesomevurl]: https://github.com/vlang/awesome-v/blob/master/README.md#scientific-computing [workflowurl]: https://github.com/vlang/vtl/actions/workflows/ci.yml [deploydocsurl]: https://github.com/vlang/vtl/actions/workflows/deploy-docs.yml [licenseurl]: https://github.com/vlang/vtl/blob/main/LICENSE

      - +
      diff --git a/la.html b/la.html index b7ae4c56..d0b99f30 100644 --- a/la.html +++ b/la.html @@ -36,7 +36,7 @@
      vtl
      - 0.2.0 3db7b44 + 0.2.0 3f725fe
      - +
      • fn det
        diff --git a/ml.metrics.html b/ml.metrics.html index 1c814718..7c171723 100644 --- a/ml.metrics.html +++ b/ml.metrics.html @@ -36,7 +36,7 @@
        vtl
        - 0.2.0 3db7b44 + 0.2.0 3f725fe
        - +
        • fn absolute_error
          diff --git a/nn.gates.activation.html b/nn.gates.activation.html index c74543db..4e18fd09 100644 --- a/nn.gates.activation.html +++ b/nn.gates.activation.html @@ -36,7 +36,7 @@
          vtl
          - 0.2.0 3db7b44 + 0.2.0 3f725fe
          - +
          • fn elu_gate
            diff --git a/nn.gates.layers.html b/nn.gates.layers.html index 4fa43dce..8a0ef786 100644 --- a/nn.gates.layers.html +++ b/nn.gates.layers.html @@ -36,7 +36,7 @@
            vtl
            - 0.2.0 3db7b44 + 0.2.0 3f725fe
            - +
            • fn dropout_gate
              diff --git a/nn.gates.loss.html b/nn.gates.loss.html index c5c9c822..aae0fdd6 100644 --- a/nn.gates.loss.html +++ b/nn.gates.loss.html @@ -36,7 +36,7 @@
              vtl
              - 0.2.0 3db7b44 + 0.2.0 3f725fe
              - +
              • fn mse_gate
                diff --git a/nn.internal.html b/nn.internal.html index 935245b3..08f874a4 100644 --- a/nn.internal.html +++ b/nn.internal.html @@ -36,7 +36,7 @@
                vtl
                - 0.2.0 3db7b44 + 0.2.0 3f725fe
                - +
                • fn compute_fans
                  diff --git a/nn.layers.html b/nn.layers.html index bd0f35af..2ddf8f1c 100644 --- a/nn.layers.html +++ b/nn.layers.html @@ -36,7 +36,7 @@
                  vtl
                  - 0.2.0 3db7b44 + 0.2.0 3f725fe
                  - +
                  • fn dropout_layer
                    diff --git a/nn.loss.html b/nn.loss.html index 6e915070..c52f5298 100644 --- a/nn.loss.html +++ b/nn.loss.html @@ -36,7 +36,7 @@
                    vtl
                    - 0.2.0 3db7b44 + 0.2.0 3f725fe
                    - +
                    • fn loss_loss
                      diff --git a/nn.models.html b/nn.models.html index e1cc1458..7a680c00 100644 --- a/nn.models.html +++ b/nn.models.html @@ -36,7 +36,7 @@
                      vtl
                      - 0.2.0 3db7b44 + 0.2.0 3f725fe
                      -
                      -

                      fn (Sequential[T]) input #

                      +
                      +

                      fn (SequentialInfo[T]) input #

                      -fn (mut nn Sequential[T]) input(shape []int)
                      +fn (mut ls SequentialInfo[T]) input(shape []int)

                      input adds a new input layer to the network with the given shape.

                      -
                      -

                      fn (Sequential[T]) linear #

                      +
                      +

                      fn (SequentialInfo[T]) linear #

                      -fn (mut nn Sequential[T]) linear(output_size int)
                      +fn (mut ls SequentialInfo[T]) linear(output_size int)

linear adds a new linear layer to the network with the given output size.

                      -
                      -

                      fn (Sequential[T]) maxpool2d #

                      +
                      +

                      fn (SequentialInfo[T]) maxpool2d #

                      -fn (mut nn Sequential[T]) maxpool2d(kernel []int, padding []int, stride []int)
                      +fn (mut ls SequentialInfo[T]) maxpool2d(kernel []int, padding []int, stride []int)

maxpool2d adds a new maxpool2d layer to the network with the given kernel size, padding, and stride.

                      -
                      -

                      fn (Sequential[T]) mse_loss #

                      +
                      +

                      fn (SequentialInfo[T]) mse_loss #

                      -fn (mut nn Sequential[T]) mse_loss()
                      +fn (mut ls SequentialInfo[T]) mse_loss()

                      mse_loss sets the loss function to the mean squared error loss.

                      -
                      -

                      fn (Sequential[T]) sigmoid_cross_entropy_loss #

                      +
                      +

                      fn (SequentialInfo[T]) sigmoid_cross_entropy_loss #

                      -fn (mut nn Sequential[T]) sigmoid_cross_entropy_loss()
                      +fn (mut ls SequentialInfo[T]) sigmoid_cross_entropy_loss()

                      sigmoid_cross_entropy_loss sets the loss function to the sigmoid cross entropy loss.

                      -
                      -

                      fn (Sequential[T]) softmax_cross_entropy_loss #

                      +
                      +

                      fn (SequentialInfo[T]) softmax_cross_entropy_loss #

                      -fn (mut nn Sequential[T]) softmax_cross_entropy_loss()
                      +fn (mut ls SequentialInfo[T]) softmax_cross_entropy_loss()

                      softmax_cross_entropy_loss sets the loss function to the softmax cross entropy loss.

                      -
                      -

                      fn (Sequential[T]) flatten #

                      +
                      +

                      fn (SequentialInfo[T]) flatten #

                      -fn (mut nn Sequential[T]) flatten()
                      +fn (mut ls SequentialInfo[T]) flatten()

                      flatten adds a new flatten layer to the network.

                      -
                      -

                      fn (Sequential[T]) relu #

                      +
                      +

                      fn (SequentialInfo[T]) relu #

                      -fn (mut nn Sequential[T]) relu()
                      +fn (mut ls SequentialInfo[T]) relu()

                      relu adds a new relu layer to the network.

                      -
                      -

                      fn (Sequential[T]) leaky_relu #

                      +
                      +

                      fn (SequentialInfo[T]) leaky_relu #

                      -fn (mut nn Sequential[T]) leaky_relu()
                      +fn (mut ls SequentialInfo[T]) leaky_relu()

                      leaky_relu adds a new leaky_relu layer to the network.

                      -
                      -

                      fn (Sequential[T]) elu #

                      +
                      +

                      fn (SequentialInfo[T]) elu #

                      -fn (mut nn Sequential[T]) elu()
                      +fn (mut ls SequentialInfo[T]) elu()

                      elu adds a new elu layer to the network.

                      -
                      -

                      fn (Sequential[T]) sigmod #

                      +
                      +

                      fn (SequentialInfo[T]) sigmod #

                      -fn (mut nn Sequential[T]) sigmod()
                      +fn (mut ls SequentialInfo[T]) sigmod()

sigmod adds a new sigmoid activation layer to the network (the method name is spelled sigmod in the API).

                      -
                      -

                      fn (Sequential[T]) forward #

                      -
                      -fn (mut nn Sequential[T]) forward(mut train autograd.Variable[T]) !&autograd.Variable[T]
                      - - -
                      - -
                      -

                      fn (Sequential[T]) loss #

                      -
                      -fn (mut nn Sequential[T]) loss(output &autograd.Variable[T], target &vtl.Tensor[T]) !&autograd.Variable[T]
                      - - -
                      - -
                      -

                      fn (SequentialInfo[T]) input #

                      +
                      +

                      fn (Sequential[T]) input #

                      -fn (mut ls SequentialInfo[T]) input(shape []int)
                      +fn (mut nn Sequential[T]) input(shape []int)

                      input adds a new input layer to the network with the given shape.

                      -
                      -

                      fn (SequentialInfo[T]) linear #

                      +
                      +

                      fn (Sequential[T]) linear #

                      -fn (mut ls SequentialInfo[T]) linear(output_size int)
                      +fn (mut nn Sequential[T]) linear(output_size int)

linear adds a new linear layer to the network with the given output size.

                      -
                      -

                      fn (SequentialInfo[T]) maxpool2d #

                      +
                      +

                      fn (Sequential[T]) maxpool2d #

                      -fn (mut ls SequentialInfo[T]) maxpool2d(kernel []int, padding []int, stride []int)
                      +fn (mut nn Sequential[T]) maxpool2d(kernel []int, padding []int, stride []int)

maxpool2d adds a new maxpool2d layer to the network with the given kernel size, padding, and stride.

                      -
                      -

                      fn (SequentialInfo[T]) mse_loss #

                      +
                      +

                      fn (Sequential[T]) mse_loss #

                      -fn (mut ls SequentialInfo[T]) mse_loss()
                      +fn (mut nn Sequential[T]) mse_loss()

                      mse_loss sets the loss function to the mean squared error loss.

                      -
                      -

                      fn (SequentialInfo[T]) sigmoid_cross_entropy_loss #

                      +
                      +

                      fn (Sequential[T]) sigmoid_cross_entropy_loss #

                      -fn (mut ls SequentialInfo[T]) sigmoid_cross_entropy_loss()
                      +fn (mut nn Sequential[T]) sigmoid_cross_entropy_loss()

                      sigmoid_cross_entropy_loss sets the loss function to the sigmoid cross entropy loss.

                      -
                      -

                      fn (SequentialInfo[T]) softmax_cross_entropy_loss #

                      +
                      +

                      fn (Sequential[T]) softmax_cross_entropy_loss #

                      -fn (mut ls SequentialInfo[T]) softmax_cross_entropy_loss()
                      +fn (mut nn Sequential[T]) softmax_cross_entropy_loss()

                      softmax_cross_entropy_loss sets the loss function to the softmax cross entropy loss.

                      -
                      -

                      fn (SequentialInfo[T]) flatten #

                      +
                      +

                      fn (Sequential[T]) flatten #

                      -fn (mut ls SequentialInfo[T]) flatten()
                      +fn (mut nn Sequential[T]) flatten()

                      flatten adds a new flatten layer to the network.

                      -
                      -

                      fn (SequentialInfo[T]) relu #

                      +
                      +

                      fn (Sequential[T]) relu #

                      -fn (mut ls SequentialInfo[T]) relu()
                      +fn (mut nn Sequential[T]) relu()

                      relu adds a new relu layer to the network.

                      -
                      -

                      fn (SequentialInfo[T]) leaky_relu #

                      +
                      +

                      fn (Sequential[T]) leaky_relu #

                      -fn (mut ls SequentialInfo[T]) leaky_relu()
                      +fn (mut nn Sequential[T]) leaky_relu()

                      leaky_relu adds a new leaky_relu layer to the network.

                      -
                      -

                      fn (SequentialInfo[T]) elu #

                      +
                      +

                      fn (Sequential[T]) elu #

                      -fn (mut ls SequentialInfo[T]) elu()
                      +fn (mut nn Sequential[T]) elu()

                      elu adds a new elu layer to the network.

                      -
                      -

                      fn (SequentialInfo[T]) sigmod #

                      +
                      +

                      fn (Sequential[T]) sigmod #

                      -fn (mut ls SequentialInfo[T]) sigmod()
                      +fn (mut nn Sequential[T]) sigmod()

sigmod adds a new sigmoid activation layer to the network (the method name is spelled sigmod in the API).
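Chained together, the builder methods documented above describe a small network. A minimal sketch, assuming the sequential constructor listed further below returns a mutable Sequential[T] (module path and constructor behavior not verified):

```v
import vtl.nn.models

fn main() {
	mut net := models.sequential[f64]()
	net.input([1, 28, 28]) // input layer with the given shape
	net.flatten() // flatten the input to a vector
	net.linear(32) // linear layer with output size 32
	net.relu() // relu activation
	net.linear(10)
	net.mse_loss() // use mean squared error as the loss
}
```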

                      +
                      + +
                      +

                      fn (Sequential[T]) forward #

                      +
                      +fn (mut nn Sequential[T]) forward(mut train autograd.Variable[T]) !&autograd.Variable[T]
                      + + +
                      + +
                      +

                      fn (Sequential[T]) loss #

                      +
                      +fn (mut nn Sequential[T]) loss(output &autograd.Variable[T], target &vtl.Tensor[T]) !&autograd.Variable[T]
                      + +
                      @@ -370,7 +370,7 @@
                      - +
                      • fn sequential
                        @@ -387,6 +387,20 @@
                      • fn sequential_with_layers
                      • +
                      • type SequentialInfo[T] +
                      • type Sequential[T]
                      • -
                      • type SequentialInfo[T] -
                      • struct Sequential
                      • diff --git a/nn.optimizers.html b/nn.optimizers.html index c21c3087..a0cc7fad 100644 --- a/nn.optimizers.html +++ b/nn.optimizers.html @@ -36,7 +36,7 @@
                        vtl
                        - 0.2.0 3db7b44 + 0.2.0 3f725fe
                        - +
                        • fn adam_optimizer
                          diff --git a/nn.types.html b/nn.types.html index e0635387..02247b0f 100644 --- a/nn.types.html +++ b/nn.types.html @@ -36,7 +36,7 @@
                          vtl
                          - 0.2.0 3db7b44 + 0.2.0 3f725fe
                          - +
                          • interface Layer
                            diff --git a/search_index.js b/search_index.js index 9332e196..14a2663c 100644 --- a/search_index.js +++ b/search_index.js @@ -604,7 +604,7 @@ var searchIndex = [ var searchModuleData = [ ["

                            vtl
                            - 0.2.0 3db7b44 + 0.2.0 3f725fe
                            - +
                            • fn absdev
                              diff --git a/storage.html b/storage.html index 2e6137d2..7d8f7a19 100644 --- a/storage.html +++ b/storage.html @@ -36,7 +36,7 @@
                              vtl
                              - 0.2.0 3db7b44 + 0.2.0 3f725fe
                              - +
                              • Constants
                              • fn from_array
                                  diff --git a/vtl.html b/vtl.html index 3ab0e3f7..615dc400 100644 --- a/vtl.html +++ b/vtl.html @@ -36,7 +36,7 @@
                                  vtl
                                  - 0.2.0 3db7b44 + 0.2.0 3f725fe

                                  vtl #

                                  -

                                  -

                                  -

                                  The V Tensor Library

                                  -

                                  vlang.io | Docs |Tutorials | Changelog |Contributing

                                  +

                                  The V Tensor Library

                                  +

                                  vlang.io | Docs | Tutorials | Changelog | Contributing

                                  import vtl
                                   t := vtl.from_array([1.0, 2, 3, 4], [2, 2])!
                                   t.get([1, 1])
                                  -// 4.0

                                  VTL Provides

                                  • An n-dimensional Tensor data structure - Sophisticated reduction, elementwise, and accumulation operations
                                  • Data Structures that can easily be passed to C libraries - Powerful linear algebra routines backed by VSL that uses LAPACKE and OpenBLAS.

                                  In the docs you can find more information about this module

                                  Installation

                                  Install dependencies (optional)

We use VSL as the backend for some functionalities. VTL requires VSL's linear algebra module. If you wish to use vtl without these, the vtl module will still function as normal.

Follow the install instructions in the VSL docs to install VSL with all needed dependencies.

                                  Install VTL

                                  v install vtl
                                  -

                                  Done. Installation completed.

                                  Testing

                                  To test the module, just type the following command:

                                  v test .
                                  -

                                  License

                                  MIT

                                  Contributors

This work was originally based on the work done by Christopher (christopherzimmerman).

The development of this library continues its course after we reimplemented its core and a large part of its interface. At the same time, we want to keep recognizing the work and inspiration that Christopher's library has given.

                                  Made with contributors-img.

                                  +// 4.0

                                  VTL Provides

                                  • An n-dimensional Tensor data structure
                                  • Sophisticated reduction, elementwise, and accumulation operations
                                  • Data Structures that can easily be passed to C libraries
                                  • Powerful linear algebra routines backed by VSL that uses LAPACKE and OpenBLAS.

                                  In the docs you can find more information about this module

                                  Installation

                                  Install dependencies (optional)

We use VSL as the backend for some functionalities. VTL requires VSL's linear algebra module. If you wish to use vtl without these, the vtl module will still function as normal.

Follow the install instructions in the VSL docs to install VSL with all needed dependencies.

                                  Install VTL

                                  v install vtl

                                  Done. Installation completed.

                                  Testing

                                  To test the module, just type the following command:

                                  v test .

                                  License

                                  MIT

                                  Contributors

This work was originally based on the work done by Christopher (christopherzimmerman).

The development of this library continues its course after we reimplemented its core and a large part of its interface. At the same time, we want to keep recognizing the work and inspiration that Christopher's library has given.

                                  Made with contributors-img.

                                  @@ -428,6 +422,126 @@

                                  The V Tensor Library

                                  }

                                  AnyTensor is an interface that allows for any tensor to be used in the vtl library

                                  + + +
                                  +

                                  fn (TensorAxisIterator[T]) next #

                                  +
                                  +fn (mut s TensorAxisIterator[T]) next[T]() ?(T, []int)
                                  +

next advances the iterator, whose iteration type is either flat or strided, and returns the current value together with its indices.

                                  + +
                                  + +
                                  +

                                  type TensorDataType #

                                  +
                                  +type TensorDataType = bool | f32 | f64 | i16 | i64 | i8 | int | string | u16 | u32 | u64 | u8
                                  +

                                  TensorDataType is a sum type that lists the possible types to be used to define storage

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) string #

                                  +
                                  +fn (v TensorDataType) string() string
                                  +

                                  string returns TensorDataType as a string.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) int #

                                  +
                                  +fn (v TensorDataType) int() int
                                  +

                                  int uses TensorDataType as an integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) i64 #

                                  +
                                  +fn (v TensorDataType) i64() i64
                                  +

                                  i64 uses TensorDataType as a 64-bit integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) i8 #

                                  +
                                  +fn (v TensorDataType) i8() i8
                                  +

i8 uses TensorDataType as an 8-bit signed integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) i16 #

                                  +
                                  +fn (v TensorDataType) i16() i16
                                  +

i16 uses TensorDataType as a 16-bit signed integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) u8 #

                                  +
                                  +fn (v TensorDataType) u8() u8
                                  +

u8 uses TensorDataType as an 8-bit unsigned integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) u16 #

                                  +
                                  +fn (v TensorDataType) u16() u16
                                  +

                                  u16 uses TensorDataType as a 16-bit unsigned integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) u32 #

                                  +
                                  +fn (v TensorDataType) u32() u32
                                  +

                                  u32 uses TensorDataType as a 32-bit unsigned integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) u64 #

                                  +
                                  +fn (v TensorDataType) u64() u64
                                  +

                                  u64 uses TensorDataType as a 64-bit unsigned integer.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) f32 #

                                  +
                                  +fn (v TensorDataType) f32() f32
                                  +

                                  f32 uses TensorDataType as a 32-bit float.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) f64 #

                                  +
                                  +fn (v TensorDataType) f64() f64
                                  +

                                  f64 uses TensorDataType as a 64-bit float.

                                  + +
                                  + +
                                  +

                                  fn (TensorDataType) bool #

                                  +
                                  +fn (v TensorDataType) bool() bool
                                  +

                                  bool uses TensorDataType as a bool.

                                  + +
                                  + +
                                  +
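
                                  The conversion methods above all follow the same pattern: because TensorDataType is a sum type, a stored value can be normalized to whichever concrete type the caller needs. A minimal sketch (the `vtl.` qualifier is an assumption about how the type is imported; only the methods documented above are used):

                                  ```v
                                  import vtl

                                  // Wrap a concrete value in the TensorDataType sum type.
                                  dt := vtl.TensorDataType(f32(1.5))

                                  // Convert the stored value to other representations
                                  // via the methods documented above.
                                  println(dt.f64())    // as a 64-bit float
                                  println(dt.int())    // as an integer (truncated)
                                  println(dt.string()) // as a string
                                  ```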

                                  fn (TensorIterator[T]) next #

                                  +
                                  +fn (mut s TensorIterator[T]) next[T]() ?(T, []int)
                                  +

                                  next advances the iterator, using either flat or strided iteration, and returns the current value together with its indices.

                                  +
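
                                  Since next returns an option, the idiomatic way to drain an iterator is a bare `for` loop that breaks when the option is none. A sketch under stated assumptions: `from_array` and `iterator` are hypothetical constructor names used here for illustration and may differ from the actual vtl API; only the next signature above is taken from this page.

                                  ```v
                                  import vtl

                                  // Hypothetical construction of a 2x2 tensor; see the vtl
                                  // module docs for the real constructor names.
                                  mut t := vtl.from_array([1.0, 2.0, 3.0, 4.0], [2, 2])!
                                  mut it := t.iterator()

                                  // next() returns ?(T, []int): the element and its indices.
                                  for {
                                      val, idx := it.next() or { break }
                                      println('${idx}: ${val}')
                                  }
                                  ```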
                                  @@ -1652,126 +1766,6 @@

                                  The V Tensor Library

                                  fn (t &Tensor[T]) with_dims[T](n int) !&Tensor[T]

                                  with_dims returns a new Tensor adding dimensions so that it has at least n dimensions

                                  -
                                  - -
                                  -

                                  fn (TensorAxisIterator[T]) next #

                                  -
                                  -fn (mut s TensorAxisIterator[T]) next[T]() ?(T, []int)
                                  -

                                  next calls the iteration type for a given iterator which is either flat or strided and returns a Num containing the current value

                                  - -
                                  - -
                                  -

                                  type TensorDataType #

                                  -
                                  -type TensorDataType = bool | f32 | f64 | i16 | i64 | i8 | int | string | u16 | u32 | u64 | u8
                                  -

                                  TensorDataType is a sum type that lists the possible types to be used to define storage

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) string #

                                  -
                                  -fn (v TensorDataType) string() string
                                  -

                                  string returns TensorDataType as a string.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) int #

                                  -
                                  -fn (v TensorDataType) int() int
                                  -

                                  int uses TensorDataType as an integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) i64 #

                                  -
                                  -fn (v TensorDataType) i64() i64
                                  -

                                  i64 uses TensorDataType as a 64-bit integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) i8 #

                                  -
                                  -fn (v TensorDataType) i8() i8
                                  -

                                  i8 uses TensorDataType as a 8-bit unsigned integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) i16 #

                                  -
                                  -fn (v TensorDataType) i16() i16
                                  -

                                  i16 uses TensorDataType as a 16-bit unsigned integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) u8 #

                                  -
                                  -fn (v TensorDataType) u8() u8
                                  -

                                  u8 uses TensorDataType as a 8-bit unsigned integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) u16 #

                                  -
                                  -fn (v TensorDataType) u16() u16
                                  -

                                  u16 uses TensorDataType as a 16-bit unsigned integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) u32 #

                                  -
                                  -fn (v TensorDataType) u32() u32
                                  -

                                  u32 uses TensorDataType as a 32-bit unsigned integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) u64 #

                                  -
                                  -fn (v TensorDataType) u64() u64
                                  -

                                  u64 uses TensorDataType as a 64-bit unsigned integer.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) f32 #

                                  -
                                  -fn (v TensorDataType) f32() f32
                                  -

                                  f32 uses TensorDataType as a 32-bit float.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) f64 #

                                  -
                                  -fn (v TensorDataType) f64() f64
                                  -

                                  f64 uses TensorDataType as a float.

                                  - -
                                  - -
                                  -

                                  fn (TensorDataType) bool #

                                  -
                                  -fn (v TensorDataType) bool() bool
                                  -

                                  bool uses TensorDataType as a bool

                                  - -
                                  - -
                                  -

                                  fn (TensorIterator[T]) next #

                                  -
                                  -fn (mut s TensorIterator[T]) next[T]() ?(T, []int)
                                  -

                                  next calls the iteration type for a given iterator which is either flat or strided and returns a Num containing the current value

                                  -
                                  @@ -1922,7 +1916,7 @@

                                  The V Tensor Library

                                  - +