INFOCOM’20: SCYLLA: QoE-aware Continuous Mobile Vision with FPGA-based Dynamic Deep Neural Network Reconfiguration.
In one sentence: use an FPGA for fast switching between different neural network models.
Problem: it is hard to run multiple neural network models efficiently and concurrently on the same device.
Solution: use an FPGA and dynamically reconfigure it to run several neural network models.
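As an illustration only (not SCYLLA's actual interface or policy): a minimal C++ sketch of what per-frame switching between pre-built DNN designs could look like on the host side, picking the most accurate design that fits a latency budget. The ModelConfig struct, the select_model function, and all numbers are hypothetical.

// Illustrative only: host-side selection among pre-built DNN designs.
// Struct fields, selection rule, and numbers are assumptions, not the paper's.
#include <cstdio>
#include <string>
#include <vector>

struct ModelConfig {
    std::string name;        // hypothetical identifier, e.g. "design1"
    double est_latency_ms;   // estimated per-frame latency on the FPGA
    double est_accuracy;     // estimated accuracy of this design
};

// Pick the most accurate design whose estimated latency fits the budget.
const ModelConfig* select_model(const std::vector<ModelConfig>& designs,
                                double latency_budget_ms) {
    const ModelConfig* best = nullptr;
    for (const auto& d : designs) {
        if (d.est_latency_ms <= latency_budget_ms &&
            (best == nullptr || d.est_accuracy > best->est_accuracy)) {
            best = &d;
        }
    }
    return best;  // nullptr if no design fits the budget
}

int main() {
    // Hypothetical numbers, only to exercise the selection logic.
    std::vector<ModelConfig> designs = {
        {"design1", 12.0, 0.70},
        {"design2", 25.0, 0.78},
        {"design3", 60.0, 0.85},
    };
    if (const ModelConfig* m = select_model(designs, 30.0)) {
        std::printf("switch FPGA engine to %s\n", m->name.c_str());
    }
    return 0;
}

Presumably the designs are already resident on the FPGA, so switching reduces to a cheap per-frame selection rather than a full model reload.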
Challenges:
Device: Xilinx ZCU102 board ($2495)
Three DNN models (Design 1, 2, 3; no names given?), which use generic convolution kernels with different degrees of parallelism (see the sketch below).
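A minimal sketch, assuming "generic convolution kernels with different parallelism" means a kernel whose number of concurrently computed output channels is a compile-time parameter; in an HLS flow the PAR loop would be unrolled into parallel hardware, while here it is plain C++ so the arithmetic can be checked on a CPU. All names and sizes are hypothetical, not the paper's actual kernels.

// Illustrative only: a "generic" convolution kernel parameterized by its
// degree of parallelism (PAR). Different designs would instantiate
// different PAR values.
#include <cstdio>
#include <vector>

// 2D convolution over a single-channel input, producing PAR output channels.
template <int PAR>
void conv2d(const std::vector<float>& in, int H, int W,
            const std::vector<float>& weights,  // PAR * K * K values
            std::vector<float>& out, int K) {
    int oh = H - K + 1, ow = W - K + 1;
    out.assign(static_cast<size_t>(PAR) * oh * ow, 0.0f);
    for (int y = 0; y < oh; ++y)
        for (int x = 0; x < ow; ++x)
            for (int p = 0; p < PAR; ++p) {        // unrolled in hardware
                float acc = 0.0f;
                for (int ky = 0; ky < K; ++ky)
                    for (int kx = 0; kx < K; ++kx)
                        acc += in[(y + ky) * W + (x + kx)] *
                               weights[(p * K + ky) * K + kx];
                out[(p * oh + y) * ow + x] = acc;
            }
}

int main() {
    int H = 8, W = 8, K = 3;
    std::vector<float> in(H * W, 1.0f);
    std::vector<float> w(4 * K * K, 1.0f);
    std::vector<float> out;
    conv2d<4>(in, H, W, w, out, K);  // a "design" with parallelism 4
    std::printf("out[0] = %f (expect 9)\n", out[0]);
    return 0;
}

A higher PAR would trade more FPGA resources for lower latency, which is presumably what distinguishes Design 1, 2, and 3.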
Task scheduling evaluation:
Lele: probably can, since this is a more systems-like paper and most of the concepts are understandable to me. The key challenge would be FPGA programming experience, of which I have only a little; I am not sure how long it would take to get a DNN model running on the board.
Lele: Regardless of how strong you think the novelty is, this style of work fits my experience well: it uses the ML algorithms as a black box instead of trying to modify the algorithms themselves. In that sense, this paper differs from the NAS-related papers we have read, where changing the NAS algorithm is the novelty.
If you could revise the fundamental principles of computer system design to improve security... what would you change?