This module exposes some functionality to download and use the VTL datasets. For this we created some batch-based iterators to load the datasets. We expose the following datasets:
const mnist_base_url = 'http://yann.lecun.com/exdb/mnist/'
const mnist_train_images_file = 'train-images-idx3-ubyte.gz'
const mnist_train_labels_file = 'train-labels-idx1-ubyte.gz'
const mnist_test_images_file = 't10k-images-idx3-ubyte.gz'
const mnist_test_labels_file = 't10k-labels-idx1-ubyte.gz'
const imdb_base_url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
const imdb_folder_name = 'aclImdb'
const imdb_file_name = '${imdb_folder_name}_v1.tar.gz'
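The full download URL for each archive is simply the base URL concatenated with the corresponding file name. The sketch below fetches the MNIST training images this way; the direct `net.http` call is only an illustration of how the constants combine, not the module's own loader.

```v
import net.http

fn main() {
	// Base URL plus file name, as defined by the constants above.
	url := 'http://yann.lecun.com/exdb/mnist/' + 'train-images-idx3-ubyte.gz'
	resp := http.get(url)!
	println('downloaded ${resp.body.len} bytes')
}
```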
[![Mentioned in Awesome V][awesomevbadge]][awesomevurl] [![Continuous Integration][workflowbadge]][workflowurl] [![Deploy Documentation][deploydocsbadge]][deploydocsurl] [![License: MIT][licensebadge]][licenseurl]
import vtl
t := vtl.from_array([1.0, 2, 3, 4], [2, 2])!
t.get([1, 1])
// 4.0
- Tensor data structure
- Sophisticated reduction, elementwise, and accumulation operations

In the docs you can find more information about this module.
We use VSL as the backend for some functionality. VTL requires VSL's linear algebra module. If you wish to use vtl without it, the vtl module will still function as normal.
Follow the install instructions in the VSL docs in order to install VSL with all needed dependencies.
v install vtl
Done. Installation completed.
To test the module, just type the following command:
v test .
This work was originally based on the work done by Christopher (christopherzimmerman).
The development of this library continues its course after having reimplemented its core and a large part of its interface. In the same way, we do not want to stop recognizing the work and inspiration that the library created by Christopher has given.
Made with contributors-img.
[awesomevbadge]: https://awesome.re/mentioned-badge.svg
[workflowbadge]: https://github.com/vlang/vtl/actions/workflows/ci.yml/badge.svg
[deploydocsbadge]: https://github.com/vlang/vtl/actions/workflows/deploy-docs.yml/badge.svg
[licensebadge]: https://img.shields.io/badge/License-MIT-blue.svg
[awesomevurl]: https://github.com/vlang/awesome-v/blob/master/README.md#scientific-computing
[workflowurl]: https://github.com/vlang/vtl/actions/workflows/ci.yml
[deploydocsurl]: https://github.com/vlang/vtl/actions/workflows/deploy-docs.yml
[licenseurl]: https://github.com/vlang/vtl/blob/main/LICENSE
fn (mut ls SequentialInfo[T]) input(shape []int)
input adds a new input layer to the network with the given shape.
fn (mut ls SequentialInfo[T]) linear(output_size int)
linear adds a new linear layer to the network with the given output size.
fn (mut ls SequentialInfo[T]) maxpool2d(kernel []int, padding []int, stride []int)
maxpool2d adds a new maxpool2d layer to the network with the given kernel size, padding, and stride.
fn (mut ls SequentialInfo[T]) mse_loss()
mse_loss sets the loss function to the mean squared error loss.
fn (mut ls SequentialInfo[T]) sigmoid_cross_entropy_loss()
sigmoid_cross_entropy_loss sets the loss function to the sigmoid cross entropy loss.
fn (mut ls SequentialInfo[T]) softmax_cross_entropy_loss()
softmax_cross_entropy_loss sets the loss function to the softmax cross entropy loss.
fn (mut ls SequentialInfo[T]) flatten()
flatten adds a new flatten layer to the network.
fn (mut ls SequentialInfo[T]) relu()
relu adds a new relu layer to the network.
fn (mut ls SequentialInfo[T]) leaky_relu()
leaky_relu adds a new leaky_relu layer to the network.
fn (mut ls SequentialInfo[T]) elu()
elu adds a new elu layer to the network.
fn (mut ls SequentialInfo[T]) sigmod()
sigmod adds a new sigmoid (sigmod) activation layer to the network.
fn (mut nn Sequential[T]) input(shape []int)
input adds a new input layer to the network with the given shape.
fn (mut nn Sequential[T]) linear(output_size int)
linear adds a new linear layer to the network with the given output size.
fn (mut nn Sequential[T]) maxpool2d(kernel []int, padding []int, stride []int)
maxpool2d adds a new maxpool2d layer to the network with the given kernel size, padding, and stride.
fn (mut nn Sequential[T]) mse_loss()
mse_loss sets the loss function to the mean squared error loss.
fn (mut nn Sequential[T]) sigmoid_cross_entropy_loss()
sigmoid_cross_entropy_loss sets the loss function to the sigmoid cross entropy loss.
fn (mut nn Sequential[T]) softmax_cross_entropy_loss()
softmax_cross_entropy_loss sets the loss function to the softmax cross entropy loss.
fn (mut nn Sequential[T]) flatten()
flatten adds a new flatten layer to the network.
fn (mut nn Sequential[T]) relu()
relu adds a new relu layer to the network.
fn (mut nn Sequential[T]) leaky_relu()
leaky_relu adds a new leaky_relu layer to the network.
fn (mut nn Sequential[T]) elu()
elu adds a new elu layer to the network.
fn (mut nn Sequential[T]) sigmod()
sigmod adds a new sigmoid (sigmod) activation layer to the network.
fn (mut nn Sequential[T]) forward(mut train autograd.Variable[T]) !&autograd.Variable[T]
forward propagates the given input variable through the network and returns the resulting output variable.
fn (mut nn Sequential[T]) loss(output &autograd.Variable[T], target &vtl.Tensor[T]) !&autograd.Variable[T]
loss computes the configured loss function between the network output and the target tensor.
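To see how these pieces fit together, here is a minimal sketch of building and running a small network. The builder methods, `forward`, and `loss` are the ones documented above; the constructor name `models.new_sequential` and the `ctx.variable` helper are assumptions based on common VTL usage, so check the module docs for the exact entry points.

```v
import vtl
import vtl.autograd
import vtl.nn.models

fn main() {
	ctx := autograd.ctx[f64]()
	// Assumed constructor name; see the nn module docs for the exact entry point.
	mut net := models.new_sequential[f64](ctx)
	net.input([2]) // input layer with shape [2]
	net.linear(4) // hidden linear layer with 4 outputs
	net.relu() // relu activation
	net.linear(1) // output layer with a single unit
	net.mse_loss() // train against mean squared error
	// One forward/loss step; `ctx.variable` wraps a tensor for autograd (assumed helper).
	mut x := ctx.variable(vtl.from_array([0.0, 1.0], [1, 2])!)
	target := vtl.from_array([1.0], [1, 1])!
	out := net.forward(mut x)!
	l := net.loss(out, target)!
	println(l)
}
```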
AnyTensor is an interface that allows any tensor to be used in the vtl library.
fn (mut s TensorAxisIterator[T]) next[T]() ?(T, []int)
next calls the iteration type for a given iterator, which is either flat or strided, and returns the current value together with its index.
type TensorDataType = bool | f32 | f64 | i16 | i64 | i8 | int | string | u16 | u32 | u64 | u8
TensorDataType is a sum type that lists the possible types to be used to define storage.
fn (v TensorDataType) string() string
string returns TensorDataType as a string.
fn (v TensorDataType) int() int
int uses TensorDataType as an integer.
fn (v TensorDataType) i64() i64
i64 uses TensorDataType as a 64-bit integer.
fn (v TensorDataType) i8() i8
i8 uses TensorDataType as an 8-bit signed integer.
fn (v TensorDataType) i16() i16
i16 uses TensorDataType as a 16-bit signed integer.
fn (v TensorDataType) u8() u8
u8 uses TensorDataType as an 8-bit unsigned integer.
fn (v TensorDataType) u16() u16
u16 uses TensorDataType as a 16-bit unsigned integer.
fn (v TensorDataType) u32() u32
u32 uses TensorDataType as a 32-bit unsigned integer.
fn (v TensorDataType) u64() u64
u64 uses TensorDataType as a 64-bit unsigned integer.
fn (v TensorDataType) f32() f32
f32 uses TensorDataType as a 32-bit float.
fn (v TensorDataType) f64() f64
f64 uses TensorDataType as a 64-bit float.
fn (v TensorDataType) bool() bool
bool uses TensorDataType as a bool.
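As a quick illustration, the sketch below stores a float in the sum type and reads it back through the accessors documented above; the truncation behavior of `int()` is an assumption.

```v
import vtl

fn main() {
	// Wrap a concrete value in the TensorDataType sum type.
	val := vtl.TensorDataType(f64(3.5))
	println(val.f64()) // 3.5
	println(val.int()) // assumed to truncate toward zero: 3
	println(val.string())
}
```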
fn (mut s TensorIterator[T]) next[T]() ?(T, []int)
next calls the iteration type for a given iterator, which is either flat or strided, and returns the current value together with its index.
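A minimal sketch of driving the iterator by hand; it assumes the iterator is obtained from a tensor via an `iterator()` method, so treat that call as an assumption.

```v
import vtl

fn main() {
	t := vtl.from_array([1.0, 2, 3, 4], [2, 2])!
	// `t.iterator()` is assumed to be the way to obtain a TensorIterator.
	mut it := t.iterator()
	for {
		val, idx := it.next() or { break }
		println('t${idx} = ${val}')
	}
}
```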
fn (t &Tensor[T]) with_dims[T](n int) !&Tensor[T]
with_dims returns a new Tensor, adding dimensions so that it has at least n dimensions.
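For example, a 1-D tensor promoted to three dimensions; whether the new axes are added as leading size-1 dimensions is an assumption, noted in the comment.

```v
import vtl

fn main() {
	t := vtl.from_array([1.0, 2, 3, 4], [4])!
	t3 := t.with_dims(3)!
	// Assumed: the added axes appear as leading size-1 dimensions, e.g. [1, 1, 4].
	println(t3.shape)
}
```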