What we have now are the touch-related gestures (rub and brush/swipe) extracted from a capacitive touch array of either discrete (0 or 1) or continuous normalized values (floats between 0 and 1). The descriptor was made for the T-Stick, which can use either of those capacitive sensing options.
This descriptor uses a simple 1 DoF blob detection algorithm that scans the array and creates blobs by reading sequences of activated stripes (array positions). This gives us multiple instances of those gestures, since it keeps scanning the array and finds all blobs, their sizes, and their positions (mean point). The rub and brush gestures integrate each blob's position to estimate its "speed" as a value close to the real speed in cm/s. That was fine-tuned based on the distance between stripes and the free parameters available for the leaky integration (i.e., there's room for improvement).
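The scan-and-group step described above could be sketched as follows. This is a minimal illustration, not the actual T-Stick implementation; the `Blob` struct, function name, and default threshold are assumptions.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of 1 DoF blob detection: scan the touch array and
// group consecutive activated stripes into blobs, recording each blob's
// start index, size, and centre (mean point). Names are illustrative.
struct Blob {
    std::size_t start;   // first activated stripe of the blob
    std::size_t size;    // number of consecutive activated stripes
    float centre;        // mean stripe index of the blob
};

std::vector<Blob> blobDetection1D(const std::vector<float>& touch,
                                  float threshold = 0.5f) {
    std::vector<Blob> blobs;
    std::size_t i = 0;
    while (i < touch.size()) {
        if (touch[i] >= threshold) {               // a blob begins here
            std::size_t start = i;
            while (i < touch.size() && touch[i] >= threshold) ++i;
            std::size_t size = i - start;
            blobs.push_back({start, size,
                             start + (size - 1) / 2.0f});  // mean point
        } else {
            ++i;
        }
    }
    return blobs;
}
```

Because it works on normalized floats with a threshold, the same sketch covers both the discrete (0 or 1) and continuous sensing options.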
What I suggest is to separate blob detection from the gesture classes, so the brush/rub classes take a single float (a blob position) as input. We can then make multi-brush and multi-rub classes that rely on blob detection for the specific case of a touch array.
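A single-float brush class along those lines might look like the sketch below. The class name, the `stripeSpacingCm`, `leak`, and `rateHz` parameters, and the leaky-integration form are all assumptions for illustration, not the actual tuned values.

```cpp
#include <cmath>

// Hypothetical sketch: a brush estimator that consumes one float per frame
// (a blob centre, in stripe units) and leaky-integrates the displacement
// into a smoothed speed estimate in cm/s.
class Brush {
public:
    Brush(float stripeSpacingCm, float leak, float rateHz)
        : spacing_(stripeSpacingCm), leak_(leak), rate_(rateHz) {}

    // Feed the current blob centre; returns the smoothed speed estimate.
    float update(float centre) {
        if (hasPrev_) {
            // instantaneous speed: stripes/frame -> cm/s
            float cmPerSec = (centre - prev_) * spacing_ * rate_;
            // leaky integration: decay old estimate, blend in new sample
            speed_ = leak_ * speed_ + (1.0f - leak_) * cmPerSec;
        }
        prev_ = centre;
        hasPrev_ = true;
        return speed_;
    }

    void reset() { hasPrev_ = false; speed_ = 0.0f; }

private:
    float spacing_, leak_, rate_;
    float prev_ = 0.0f, speed_ = 0.0f;
    bool hasPrev_ = false;
};
```

A multi-brush class would then simply hold one such estimator per detected blob, feeding each the corresponding blob centre every frame.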
- The `blobDetection1D` logic should be independent from the Touch class
- Extract `blobDetection1D` from the Touch class into `utils.h` as a reusable function
- Call `blobDetection1D` from `utils.h`