| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| How to formulate a loss function that minimizes both cross-entropy and an additional term? | 4 | 503 | July 11, 2023 |
| None of the MLIR Optimization passes are enabled | 14 | 42100 | January 30, 2023 |
| While doing quantization, is it possible to specify the scale and zero point for the TensorFlow int8 kernel? | 3 | 2167 | January 30, 2023 |
| Please share the pruning & fine-tuning configuration of MobileNet V2 | 1 | 706 | October 26, 2022 |
| Does not improve predictive performance ("pruning for on-device inference") | 0 | 521 | September 22, 2022 |
| Results of the pruned model | 6 | 885 | June 15, 2022 |
| Weight tensors after applying structural pruning | 2 | 1158 | June 2, 2022 |
| Reducing XLA AOT compiled model size | 4 | 1840 | May 17, 2022 |
| Pruned Model Stripping | 4 | 2018 | May 16, 2022 |
| Structured vs Unstructured Pruning | 3 | 1542 | May 13, 2022 |
| Fine-tune pre-trained model with pruning | 1 | 1584 | May 13, 2022 |
| Customize structural pruning rate in each layer | 4 | 1583 | May 13, 2022 |
| Model quantization-aware training problem | 1 | 1109 | May 13, 2022 |
| Quantization-aware training: in/output still float32? | 1 | 1590 | May 13, 2022 |
| How to use TensorFlow Model Optimization to prune without any tf.keras support? | 2 | 1544 | May 9, 2022 |
| Quantization-aware training with QuantizationConfig: 4% accuracy loss | 1 | 1676 | April 25, 2022 |
| Is it possible to use 8-bit integers instead of floating-point numbers? | 1 | 1099 | April 20, 2022 |
| Layers not pruned after using prune_low_magnitude | 2 | 1427 | February 14, 2022 |
| Where can I find a symmetric TFLite quantization model? | 1 | 1824 | December 9, 2021 |
| Transfer learning and quantization-aware training with a subclassed model | 2 | 2992 | December 3, 2021 |
| 4-bit quantization-aware training | 1 | 2064 | October 19, 2021 |
| API improvement: measures to avoid ValueError when pruning low magnitude | 2 | 1395 | September 8, 2021 |
| How to convert a tensor to a NumPy array without enabling the run_eagerly flag in Keras | 3 | 11743 | September 6, 2021 |
| How do I quantize the weights of neural networks? | 1 | 441 | August 5, 2021 |
| Add quantization output configuration for QAT | 2 | 1630 | August 4, 2021 |
| Clustering after pruning | 2 | 1616 | July 30, 2021 |
| Why apply gzip after pruning or weight clustering? | 5 | 1289 | July 21, 2021 |
| Pruning a trained object detection SavedModel | 2 | 1924 | July 20, 2021 |
| 'function_optimizer.py' returns empty graph | 18 | 3372 | July 9, 2021 |
| No size reduction in TFLite model with integer quantization | 6 | 2105 | July 7, 2021 |