
Commit b3872b1

Updating README, TODOs
1 parent 6b6017b commit b3872b1

5 files changed, +5609 -69 lines changed


README.md

Lines changed: 61 additions & 34 deletions
@@ -13,7 +13,6 @@ libraryDependencies += "com.github.EmergentOrder" %% "onnx-scala-backends" % "0.
 
 As of v0.1.0, artifacts are published to Sonatype OSS / Maven Central. For the latest, build and publish locally from master.
 
-
 ### Full ONNX model inference quick start
 First, download the [model file](https://s3.amazonaws.com/onnx-model-zoo/squeezenet/squeezenet1.1/squeezenet1.1.onnx) for [SqueezeNet](https://en.wikipedia.org/wiki/SqueezeNet).
 You can use `get_models.sh`
@@ -44,20 +43,35 @@ val squeezenetBytes = Files.readAllBytes(Paths.get("squeezenet1.1.onnx"))
 
 val squeezenet = new ORTModelBackend(squeezenetBytes)
 
+val data = Array.fill(1*3*224*224){42f}
+val tensorDenotation: String & Singleton = "SomeTensorType"
 //In NCHW tensor image format
-val imageTens = Tensor(Array.fill(1*3*224*224){42f},"SomeTensorType","Batch" ##: "Channel" ##: "Height" ##: "Width" ##: TSNil,1 #: 3 #: 224 #: 224 #: SNil)
+val tensorShapeDenotation = "Batch" ##: "Channel" ##: "Height" ##: "Width" ##: TSNil
+val shape = 1 #: 3 #: 224 #: 224 #: SNil
+
+val imageTens = Tensor(data,tensorDenotation,tensorShapeDenotation,shape)
+
+//or as a shorthand if you aren't concerned with enforcing denotations
+val imageTensDefaultDenotations = Tensor(data,shape)
 ```
 
 Note that ONNX Tensor content is in row-major order.
 
 ```scala
 val out = squeezenet.fullModel[Float, "T","T" ##: TSNil,1 #: 1000 #: SNil](Tuple(imageTens))
-
+// val out:
+// org.emergentorder.onnx.Tensors.Tensor[Float, ("T", "T" ##:
+// org.emergentorder.compiletime.TSNil
+// , 1 #: 1000 #: io.kjaer.compiletime.SNil)] = (Array(0.8230729,
+// ...
 //The output shape
 out.shape
+// val res0: Array[Int] = Array(1, 1000)
+
 
 //The highest probability (predicted) class
-out.data.indices.maxBy(out._1)
+out.data.indices.maxBy(out.data)
+// val res1: Int = 418
 ```
 
 Referring to the [ImageNet 1000 class labels](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), we see that the predicted class is "ballpoint pen".
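The row-major note and the `maxBy` argmax above can be illustrated in plain Scala, independent of ONNX-Scala (`nchwIndex` and `argmax` below are hypothetical helper names for illustration, not library API):

```scala
// Flattened offset of element (n, c, h, w) in a row-major NCHW buffer.
// Hypothetical helper for illustration; not part of ONNX-Scala.
def nchwIndex(n: Int, c: Int, h: Int, w: Int,
              channels: Int, height: Int, width: Int): Int =
  ((n * channels + c) * height + h) * width + w

// Index of the largest score: the same idiom as out.data.indices.maxBy(out.data).
def argmax(scores: Array[Float]): Int =
  scores.indices.maxBy(i => scores(i))

// Element (0, 1, 0, 0) of a 1x3x224x224 tensor sits one 224*224 plane in:
val planeOffset = nchwIndex(0, 1, 0, 0, 3, 224, 224) // 50176 = 224 * 224
val predicted = argmax(Array(0.1f, 0.9f, 0.3f))      // 1
```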
@@ -77,16 +91,37 @@ Feel free to wrap your calls into it in a facade with typed inputs.
 You can call individual operators:
 
 ```scala
-val onnx = new ORTOperatorBackendAll()
-
-val longTens = Tensor(Array.fill(1*3*224*224){42l},"SomeTensorType","Batch" ##: "Channel" ##: "Height" ##: "Width" ##: TSNil,1 #: 3 #: 224 #: 224 #: SNil)
-
-onnx.AbsV6("abs", longTens)
+val onnxBackend = new ORTOperatorBackendAll()
+
+val longTens = Tensor(Array.fill(1*3*224*224){-42l},tensorDenotation,tensorShapeDenotation,shape)
+// longTens:
+// org.emergentorder.onnx.Tensors.Tensor[Float, ("T", "T" ##:
+// org.emergentorder.compiletime.TSNil
+// , 1 #: 1000 #: io.kjaer.compiletime.SNil)] = (
+// Array(
+//   -42L,
+//   -42L,
+// ...
+
+onnxBackend.AbsV6("abs", longTens)
+// res2:
+// org.emergentorder.onnx.Tensors.Tensor[Float, ("T", "T" ##:
+// org.emergentorder.compiletime.TSNil
+// , 1 #: 1000 #: io.kjaer.compiletime.SNil)] = (
+// Array(
+//   42L,
+//   42L,
+// ...
 ```
 
 Sqrt will fail to compile because it's not defined for Long:
 ```scala
-onnx.SqrtV6("sqrt", longTens)
+onnxBackend.SqrtV6("sqrt", longTens)
+// ...
+//Required: org.emergentorder.onnx.Tensors.Tensor[T, (
+//...
+//where: T is a type variable with constraint <: org.emergentorder.onnx.Float16 | Float | Double
+
 ```
 Note that in real use backends should be closed to prevent native memory leaks.
 
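As a plain-Scala sketch of what the `AbsV6` call computes, elementwise absolute value over the tensor's flat data (an illustration, not the library call):

```scala
// Elementwise absolute value over a flat Long buffer, mirroring what
// AbsV6 computes on a tensor's data. Illustration only, not ONNX-Scala API.
def absAll(data: Array[Long]): Array[Long] =
  data.map(x => math.abs(x))

val absExample = absAll(Array(-42L, -42L, 7L)) // Array(42L, 42L, 7L)
```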
@@ -107,38 +142,29 @@ This API is expressed via traits, with version-named methods. For example, Abs,
 
 ```scala
 import scala.{specialized => sp}
-import spire.math.UByte
-import spire.math.UShort
-import spire.math.UInt
-import spire.math.ULong
-import spire.math.Numeric
+import spire.math._
 import spire.implicits._
-import scala.reflect.ClassTag
 import org.emergentorder.onnx._
 
 trait AbsV6 extends Operator {
 def AbsV6[
-@sp T <: UByte | UShort | UInt | ULong | Byte | Short | Int | Long | Float16 | Float | Double: Numeric
-, Tt <: TensorTypeDenotation, Td <: TensorShapeDenotation, S <: Shape](name: String, X: Tensor[T, Tuple3[Tt, Td, S]])(using tt: ValueOf[Tt], td: TensorShapeDenotationOf[Td], s: ShapeOf[S]): Tensor[T, Tuple3[Tt, Td, S]] = {
+@sp T <: UByte | UShort | UInt |
+ULong | Byte | Short | Int |
+Long | Float16 | Float | Double: Numeric,
+Tt <: TensorTypeDenotation,
+Td <: TensorShapeDenotation,
+S <: Shape]
+(name: String, X: Tensor[T, Tuple3[Tt, Td, S]])
+(using tt: ValueOf[Tt],
+td: TensorShapeDenotationOf[Td],
+s: ShapeOf[S]): Tensor[T, Tuple3[Tt, Td, S]] = {
 val map: Map[String, Any] = Map()
 val allInputs = Tuple1(X)
 (callOp(name, "Abs", allInputs, map))
 }
 }
 ```
 
-A few more examples of the type constraints in action (fail to compile):
-```scala
-val stringTens = Tensor(Array.fill(1*3*224*224){"test"},"SomeTensorType","Batch" ##: "Channel" ##: "Height" ##: "Width" ##: TSNil,1 #: 3 #: 224 #: 224 #: SNil)
-onnx.AbsV6("abs", stringTens)
-```
-
-```scala
-val aBigInt = new BigInt(new java.math.BigInteger("5"))
-val bigIntTens = Tensor(Array.fill(1*3*224*224){aBigInt},"SomeTensorType","Batch" ##: "Channel" ##: "Height" ##: "Width" ##: TSNil,1 #: 3 #: 224 #: 224 #: SNil)
-onnx.Abs6("abs", bigIntTens)
-```
-
 Using this API, each ONNX operation is executed on the underyling backend individually.
 As a result, you can write your own models from scratch in Scala using ONNX-Scala operations, injecting parameters from outside sources as need be.
 
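The union-type upper bound in the `AbsV6` signature is plain Scala 3 and can be demonstrated without the library. The sketch below uses a hypothetical `elementwiseAbs` with `scala.math.Numeric` standing in for Spire's `Numeric`; it shows how the compiler rejects element types outside the union:

```scala
import scala.math.Numeric
import scala.reflect.ClassTag

// A Scala 3 union-type upper bound restricts T to the listed element types,
// in the same spirit as AbsV6's constraint. Hypothetical helper, not library API.
def elementwiseAbs[T <: Int | Long | Float | Double](data: Array[T])(
    using num: Numeric[T], ct: ClassTag[T]): Array[T] =
  data.map(x => num.abs(x))

val longsOk = elementwiseAbs(Array(-1L, 2L, -3L)) // compiles: Long is in the union
// elementwiseAbs(Array("a", "b")) // does not compile: String is outside the bound
```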
@@ -162,6 +188,7 @@ Supported ONNX input and output tensor data types:
 * Long
 * Float
 * Double
+* Boolean
 
 Supported ONNX ops:
 

@@ -197,15 +224,15 @@ to build against Scala 2.13 and Dotty/3.0, where possible.
 
 #### Core
 
-* [ONNX](https://github.com/onnx/onnx) via [ScalaPB](https://github.com/scalapb/ScalaPB) - Open Neural Network Exchange / The missing bridge between Java and native C++ libraries (For access to Protobuf definitions and operator schemas)
+* [ONNX](https://github.com/onnx/onnx) via [ScalaPB](https://github.com/scalapb/ScalaPB) - Open Neural Network Exchange / The missing bridge between Java and native C++ libraries (For access to Protobuf definitions, used in the fine-grained API to create ONNX models in memory to send to the backend)
 
-* [Spire](https://github.com/non/spire) - Typelevel project enabling generic numeric programming (For support for unsigned ints, complex numbers and the Numeric type class in the core API)
+* [Spire](https://github.com/typelevel/spire) - Powerful new number types and numeric abstractions for Scala. (For support for unsigned ints, complex numbers and the Numeric type class in the core API)
 
-* [Dotty](https://github.com/lampepfl/dotty) - A next-generation compiler that will become Scala 3 (For native union types, formerly used here to express ONNX type constraints, but currently using cross-version source compatibile union types instead)
+* [Dotty](https://github.com/lampepfl/dotty) - The Scala 3 compiler, also known as Dotty. (For union types (used here to express ONNX type constraints), match types, compiletime singleton ops, ...)
 
 #### Backends
 
-* [ONNX Runtime via ORT Java API](https://github.com/microsoft/onnxruntime/tree/master/java) - ONNX Runtime: cross-platform, high performance scoring engine for ML models
+* [ONNX Runtime via ORT Java API](https://github.com/microsoft/onnxruntime/tree/master/java) - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
 
 ### Inspiration
 

core/src/main/scala/ONNX.scala

Lines changed: 1 addition & 0 deletions
@@ -19,6 +19,7 @@ import org.emergentorder.compiletime._
 import org.emergentorder.compiletime.TensorShapeDenotation.Reverse
 package object onnx {
 
+//TODO: report Dotty compilation time explosion: ~5s -> 380s
 //TODO to consider: Use existing typeclasses here / in NDScala
 //TODO: Fix propagation behavavior for TensorShapeDenotation
 //TODO:Remaining typed axis semantics, JS support

0 commit comments
