(Legacy software - not maintained anymore due to many breaking changes in dependencies)
Indexed Database reactive (Rx) wrapper written in [scala.js][1] using [monix][2], [uPickle][3] and [uTest][4].
- Scala.js version: 0.6.19

```scala
resolvers in ThisBuild ++= Seq(
  Resolver.bintrayRepo("pragmaxim", "maven")
)

libraryDependencies ++= Seq("com.pragmaxim" %%% "scalajs-rx-idb" % "0.0.9")
```
Primarily it is trying to be:
- thanks to [uPickle][3] and its Reader/Writer type classes, the user just declares input/return types and uPickle does the rest. It even lets you deserialize objects in a type-safe manner without knowing what type of object you're going to get from the database (see uPickle's tagged values in sealed hierarchies, and the sketch after this list)
- a key validation type class doesn't let you store keys of unsupported types
- an abstraction over CRUD operations lets you work seamlessly with both Scala collections and IDB key ranges, over either a store or an index
- there is too much mutability and confusion in the IndexedDB API regarding request result values, versioning, transactions and error handling
- it should prevent lock starvation, which is a common problem with IndexedDB
- it should supervise transaction boundaries. There are a few edge cases I haven't covered yet, though; I asked a question on SO
- an Rx-based API has a clean contract by definition
- the IndexedDB API, IMHO, leads to inevitable callback hell, and I couldn't really tell when it crashes or why
- it makes it easier to implement new features like profiling
- you get full control over the returned data streams in the form of higher-order functions
- thanks to Monifu's back-pressure implementation you can asynchronously process lazily requested results, with the possibility to cancel
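For illustration, here is a minimal sketch of uPickle's tagged-value mechanism for sealed hierarchies, written against a recent uPickle (this is a standalone uPickle example, not this library's API; `Fruit`, `Apple` and `Banana` are made-up types):

```scala
object FruitJson {
  import upickle.default._

  sealed trait Fruit
  case class Apple(weight: Int)  extends Fruit
  case class Banana(length: Int) extends Fruit

  implicit val appleRW: ReadWriter[Apple]   = macroRW
  implicit val bananaRW: ReadWriter[Banana] = macroRW
  implicit val fruitRW: ReadWriter[Fruit]   = macroRW

  // the written JSON carries a type tag (e.g. "$type": "Apple"), so reading it back
  // as Fruit recovers the concrete subclass without the caller knowing it up front
  val json: String = write[Fruit](Apple(150))
  val fruit: Fruit = read[Fruit](json) // Apple(150)
}
```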
In other words, doing complicated stuff with IndexedDB directly is not as easy as one might expect. I came to the conclusion that IndexedDB is rather a DB engine meant to be used by databases built on top of it.
NOTE
- Only the main operations are tested so far; it's a work in progress and there hasn't been time to test edge cases
- Performance might be slightly worse compared with direct IDB access
- But after you spend some time with IDB you'll know that losing a few milliseconds is always better than lock starvation that might bring the entire application down or cost hours of troubleshooting
- This library is suited to bulk operations rather than requests targeting a single record, which is why all methods accept either an Iterable or a KeyRange (see the sketch below)
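Even a single-record lookup therefore goes through the collection-based methods; a minimal sketch, reusing a `store` opened as in the examples further below (the key `42` is just an assumed value):

```scala
// single record: wrap the key in a one-element collection
store.get(List(42)).onCompleteNewTx { tuples =>
  // at most one (key, value) pair comes back
  tuples.headOption.foreach { case (key, value) => println(s"$key -> $value") }
}
```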
Struggles
- There was significant resistance to building an abstraction over a JavaScript API that itself doesn't even have a common interface for IDBObjectStore and IDBIndex, even though both are essentially stores sharing common operations
- I was fighting scalac a lot
- [Scalac type inference sometimes really surprises (unpleasantly)][5] - I had to combine path-dependent types with type classes, which is always a bad idea (I'll personally never do it again)
- [Scalac doesn't look into descendants of associated types][6] - which also made my day
- [After I rewrote the API to a path-dependent style to spare the user from having to explicitly specify types all the time][7] - I found out that I should probably have rewritten the entire application from scratch, because due to these unexpected issues the interface turned out to be a little less self-explanatory than I intended. I'd like to rewrite it a little in the future
There is a lot to abstract over when querying IDB, especially key autogeneration, keys living on the value's keyPath, KeyRanges, Indexes, operations on the last and first record, etc. I could have used Scala macros to generate the API based on a DB schema, but unfortunately I decided not to; there are just 4 methods that basically do everything based on the type of their input (a rough sketch of the type-class idea follows the signatures below).
```scala
// I              - either the store's Value type OR a (Key, Value) pair
// C              - type constructor of any type that is Iterable
// StoreKeyPolicy - type class abstracting over the key being on the value's keyPath,
//                  autogenerated, or explicitly specified
// Tx             - type class for ad-hoc polymorphism regarding transaction handling
def add[I, C[X] <: Iterable[X]](values: C[I])(implicit p: StoreKeyPolicy[I], tx: Tx[C]): Observable[(K, V)]

// C  - type constructor that may be either an Iterable or a KeyRange of keys
// Tx - lets you add custom logic to the request; evidence is provided just for Iterable and KeyRange
def get[C[_]](keys: C[K])(implicit e: Tx[C]): Observable[(K, V)]

// usually an Observable of (Key, Value) pairs is returned; delete just completes
def delete[C[_]](keys: C[K])(implicit e: Tx[C]): Observable[Nothing]

// update works like add except that it also supports KeyRange - beware, you must supply entries for the whole KeyRange
def update[I, C[_]](input: C[I])(implicit p: StoreKeyPolicy[I], e: Tx[C]): Observable[(K, V)]
```
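To make the role of `Tx` more concrete, here is a rough, self-contained sketch of the type-class-over-a-type-constructor idea. The names `describe` and `KeyRangeLike` are hypothetical; the library's real instances drive IDB transactions rather than build strings:

```scala
object TxSketch {

  // one evidence value per container type decides how a request is driven
  trait Tx[C[_]] {
    def describe[K](keys: C[K]): String
  }

  final case class KeyRangeLike[K](lower: K, upper: K)

  implicit val iterableTx: Tx[Iterable] = new Tx[Iterable] {
    def describe[K](keys: Iterable[K]): String = s"requesting ${keys.size} individual keys"
  }

  implicit val rangeTx: Tx[KeyRangeLike] = new Tx[KeyRangeLike] {
    def describe[K](keys: KeyRangeLike[K]): String = s"requesting keys between ${keys.lower} and ${keys.upper}"
  }

  // the same signature accepts both plain collections and key-range-like containers
  def get[C[_], K](keys: C[K])(implicit tx: Tx[C]): String = tx.describe(keys)

  def main(args: Array[String]): Unit = {
    println(get(List(1, 2, 3): Iterable[Int])) // requesting 3 individual keys
    println(get(KeyRangeLike(3, 5)))           // requesting keys between 3 and 5
  }
}
```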
- The best place to look at examples is the IndexedDbSuite
- Note that the CRUD operations accept either anything that is `Iterable` or any `com.pragmaxim.idb.Store.Key`
- working with iterables
```scala
val obj1 = Map("x" -> 0) // store values might be anything that uPickle manages to serialize
val obj2 = Map("y" -> 1)
val db = IndexedDb( // you may create a new db, or open, upgrade or recreate an existing one
  new NewDb("dbName", db => db.createObjectStore("storeName", lit("autoIncrement" -> true)))
)
val store = db.openStore[Int, Map[String, Int]]("storeName") // declare the Store's key and value types
// db requests should be combined with the `onCompleteNewTx` combinator, which honors IDB transaction boundaries
store.add(List(obj1, obj2)).onCompleteNewTx { appendTuples =>
  assert(appendTuples.length == 2)
  val (keys, values) = appendTuples.unzip
  assert(values.head == Map("x" -> 0))
  store.get(keys).onCompleteNewTx { getTuples =>
    val (keys2, _) = getTuples.unzip
    store.delete(keys2).onCompleteNewTx { empty =>
      store.count.onCompleteNewTx { counts =>
        assert(counts(0) == 0)
        db.close()
      }
    }
  }
}
```
- working with key ranges
```scala
val store = db.openStore[Int, Int](storeName)
store.add(1 to 10).onCompleteNewTx { tuples =>
  store.delete(store.lastKey).onCompleteNewTx { empty =>
    store.count.map { count =>
      assert(count == 9)
    }
    store.delete(store.firstKey).onCompleteNewTx { empty =>
      store.count.map { count =>
        assert(count == 8)
      }
      store.delete(store.rangedKey(IDBKeyRange.bound(3, 5), Direction.Prev)).onCompleteNewTx { empty =>
        store.count.map { count =>
          assert(count == 5)
        }
        db.close()
      }
    }
  }
}
```
- working with Index
```scala
val db = IndexedDb(recreateDB(dbName))
val store = db.openStore[Int, AnInstance](dbName)
val index = store.index[String]("testIndex")
store.add(List(obj)).onCompleteNewTx { appendTuples =>
  index.get(List("index")).onCompleteNewTx { tuples =>
    db.close()
  }
}
```
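For the index example to work, "testIndex" has to exist already; presumably it is declared in the upgrade callback (the `recreateDB` helper above). A hedged sketch of what that might look like, assuming a `name` keyPath on `AnInstance`, reusing the `NewDb`/`lit` wiring from the first example, and using the raw IndexedDB `createIndex` call:

```scala
val db = IndexedDb(
  new NewDb(dbName, { db =>
    // create the store and declare the index the example queries;
    // the "name" keyPath is an assumption for illustration
    val store = db.createObjectStore(dbName, lit("autoIncrement" -> true))
    store.createIndex("testIndex", "name")
    store
  })
)
```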