Neural network layer/op support
Below is a list of all TFLite 'ops' (operations or neural network layer types) that are supported by the inference engine. The main data-type of the inference engine is the quantized 8-bit integer ('INT8'), which should be used for best performance. Some supporting ops are also available in quantized 16-bit integer ('INT16'), non-quantized 32-bit integer ('INT32') or 32-bit floating-point ('FLOAT32') mode. These supporting ops are typically only used for simple index or shape computations.
For some ops below, a data-type is only supported under certain conditions, for example depending on the op's parameters. These conditions are mentioned in the 'Notes' column and/or are checked at run-time. More details on each TFLite op type can be found on the TFLite MLIR website.
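To target the INT8 path described above, a model is typically converted with full-integer quantization. The sketch below shows one common way to do this with the standard TensorFlow Lite converter; the small Keras model and the random calibration data are placeholders and should be replaced by your own model and representative input samples.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: replace with your own trained Keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_dataset():
    # Placeholder calibration data: yield real samples from your input pipeline.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict conversion to the INT8 builtin kernels so float fallbacks are rejected.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```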
We group the ops into the following main types to make the long table easier to parse:
- NN layer: A neural network layer, typically only available in quantized INT8
- NN activation: A neural network activation function
- Math op: A mathematical support op, typically only available in FLOAT32
- Support op: Any other support operation, used e.g. for indexing or reshaping
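Before looking up individual ops in the table below, it can help to list exactly which ops a converted model contains. One way to do this is TensorFlow's model analyzer (available in recent TensorFlow releases); the file name assumes the conversion sketch above.

```python
import tensorflow as tf

# Print a per-operator breakdown of the converted model.
# "model_int8.tflite" is the file written by the conversion sketch above.
tf.lite.experimental.Analyzer.analyze(model_path="model_int8.tflite")
```

Each operator name in the analyzer output can then be checked against the table.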
TFLite op/layer name | Type | INT8 (quantized) | INT32 | FLOAT32 | Notes |
---|---|---|---|---|---|
Abs | Math op | | | ✔ | |
Add | NN layer | ✔ | | ✔ | |
AddN | NN layer | ✔ | | ✔ | |
ArgMax | Support op | ✔ | ✔ | ✔ | Output is always INT32 |
ArgMin | Support op | ✔ | ✔ | ✔ | Output is always INT32 |
AssignVariable | Support op | ✔ | ✔ | ✔ | Supports any data-type |
AveragePool2D | NN layer | ✔ | | | |
BatchToSpaceNd | Support op | ✔ | | ✔ | |
BroadcastArgs | Support op | | ✔ | | |
BroadcastTo | Support op | ✔ | ✔ | ✔ | Supports any data-type |
CallOnce | Support op | ✔ | ✔ | ✔ | Supports any data-type |
Cast | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
Ceil | Math op | | | ✔ | |
CircularBuffer | Support op | ✔ | | | |
Concatenation | Support op | ✔ | | | |
Conv2D | NN layer | ✔ | | | |
Cos | Math op | | | ✔ | |
CumSum | Support op | ✔ | | ✔ | |
DepthToSpace | Support op | ✔ | | ✔ | |
DepthwiseConv2D | NN layer | ✔ | | | |
Dequantize | Support op | ✔ | | | Also supports quantized INT16 |
DetectionPostProcess | NN layer | ✔ | | | |
Div | NN layer | ✔ | | ✔ | |
Elu | NN activation | ✔ | | ✔ | |
Equal | Support op | ✔ | ✔ | ✔ | Also supports bools and INT64 |
Exp | Math op | | | ✔ | |
ExpandDims | Support op | ✔ | | ✔ | |
Fill | Support op | ✔ | ✔ | ✔ | |
Floor | Math op | | | ✔ | |
FloorDiv | Math op | | | ✔ | |
FloorMod | Math op | | | ✔ | |
FullyConnected | NN layer | ✔ | | | |
Gather | Support op | ✔ | | ✔ | With INT32 coordinates |
GatherNd | Support op | ✔ | | ✔ | With INT32 coordinates |
Greater | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
GreaterEqual | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
HardSwish | NN activation | ✔ | | ✔ | |
If | Support op | ✔ | ✔ | ✔ | Supports any data-type |
L2Normalization | NN layer | ✔ | | ✔ | |
L2Pool2D | NN layer | ✔ | | | |
LeakyRelu | NN activation | ✔ | | ✔ | Also supports quantized INT16 |
Less | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
LessEqual | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
Log | Math op | | | ✔ | |
LogicalAnd | Bool support | | | | Boolean only |
LogicalNot | Bool support | | | | Boolean only |
LogicalOr | Bool support | | | | Boolean only |
Logistic | Math op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
Maximum | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
MaxPool2D | NN layer | ✔ | | | |
MirrorPad | Support op | ✔ | | ✔ | |
Mean | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
Minimum | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
Mul | NN layer | ✔ | ✔ | ✔ | INT32 mode is quantized |
Neg | Math op | | | ✔ | |
NotEqual | Support op | ✔ | ✔ | ✔ | Also supports bools and INT64 |
Pack | Support op | ✔ | ✔ | ✔ | Also supports INT64 |
Pad | Support op | ✔ | | | |
PadV2 | Support op | ✔ | | | |
Prelu | NN activation | ✔ | | ✔ | |
Quantize | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
ReadVariable | Support op | ✔ | ✔ | ✔ | Supports any data-type |
ReduceMax | Support op | ✔ | | ✔ | |
Relu | NN activation | ✔ | | ✔ | |
Relu6 | NN activation | ✔ | | ✔ | |
Reshape | Support op | ✔ | ✔ | ✔ | Also supports bools and INT64 |
ResizeBilinear | Support op | ✔ | | ✔ | |
ResizeNearestNeighbor | Support op | ✔ | | ✔ | Also supports quantized INT16 |
ReverseV2 | Support op | ✔ | ✔ | ✔ | Also supports bools and INT64 |
Round | Math op | | | ✔ | |
Rsqrt | Math op | | | ✔ | |
SelectV2 | Support op | ✔ | | ✔ | Also supports quantized INT16 |
Shape | Shape op | ✔ | | ✔ | |
Sin | Math op | | | ✔ | |
Slice | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
Softmax | NN activation | ✔ | | | |
SpaceToBatchNd | Support op | ✔ | | ✔ | |
SpaceToDepth | Support op | ✔ | | ✔ | |
Split | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
SplitV | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
SquaredDifference | Math op | ✔ | ✔ | ✔ | |
Squeeze | Support op | ✔ | ✔ | ✔ | Supports any data-type |
Sqrt | Math op | | | ✔ | |
Square | Math op | | | ✔ | |
StridedSlice | Support op | ✔ | ✔ | ✔ | Also supports quantized INT16 |
Sub | NN layer | ✔ | | ✔ | Also supports quantized INT16 |
Sum | NN layer | ✔ | | ✔ | Also supports quantized INT16 |
Svdf | NN layer | ✔ | | ✔ | |
Tanh | NN activation | ✔ | | ✔ | Also supports quantized INT16 |
TransposeConv | NN layer | ✔ | | | |
Transpose | Support op | ✔ | | ✔ | |
Unpack | Support op | ✔ | ✔ | ✔ | |
UnidirectionalSequenceLSTM | NN layer | ✔ | | ✔ | FLOAT32 is in hybrid mode |
VarHandle | Support op | ✔ | ✔ | ✔ | Supports any data-type |
While | Support op | ✔ | ✔ | ✔ | Supports any data-type |
ZerosLike | Support op | ✔ | ✔ | ✔ | Also supports INT64 |