Ask HN: What made VLIW a good fit for DSPs compared to GPUs?

rishabhaiover
4 days ago
7 points
Why didn’t DSPs evolve toward vector accelerators instead of VLIW, despite having highly regular data-parallel workloads?
3 comments

Comments

dlcarrier 4 days ago
That's not a real dichotomy. A RISC processor can range from completely lacking vector instructions to including complex matrix multiplications, and so can a VLIW processor.

As far as why there are architectural differences between DSPs and GPUs, the largest reason is that DSPs are designed for processing one-dimensional data, like sensor readings or audio, whereas GPUs are designed to process two-dimensional data, like pictures and video. Adding a dimension greatly increases the complexity of the data-processing algorithms the device will be running.
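
A rough illustration of that dimensionality point, as a sketch in plain C (simplified, hypothetical kernels, not taken from any particular DSP or GPU SDK): the 1D FIR filter is the bread-and-butter DSP workload with a single streaming loop, while the 2D convolution shows how adding a dimension multiplies the loop nest and the addressing complexity.

    /* 1D: y[n] = sum_k h[k] * x[n-k]  (typical audio/sensor DSP task) */
    void fir_1d(const float *x, const float *h, float *y, int n, int taps) {
        for (int i = taps - 1; i < n; i++) {
            float acc = 0.0f;
            for (int k = 0; k < taps; k++)
                acc += h[k] * x[i - k];      /* regular, streamable access */
            y[i] = acc;
        }
    }

    /* 2D: every output now depends on a neighborhood in two dimensions
       (typical image/video task on a GPU) */
    void conv_2d(const float *img, const float *ker, float *out,
                 int w, int h, int kw, int kh) {
        for (int r = kh / 2; r < h - kh / 2; r++)
            for (int c = kw / 2; c < w - kw / 2; c++) {
                float acc = 0.0f;
                for (int kr = 0; kr < kh; kr++)
                    for (int kc = 0; kc < kw; kc++)
                        acc += ker[kr * kw + kc] *
                               img[(r + kr - kh / 2) * w + (c + kc - kw / 2)];
                out[r * w + c] = acc;
            }
    }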

rishabhaiover 4 days ago
I read this old paper: https://ieeexplore.ieee.org/document/1176257 comparing VLIW, superscalar, and vector processors for embedded devices, and it turns out vector processors performed best (for the given benchmark) with the second-lowest power consumption.
dspwizard 3 days ago
VLIW works great when memory access latencies are short and predictable: you will not find DSP designs that do not use TCM (core-local SRAM). So your program DMAs input data into its own TCM and works on it from there, as in the sketch below. GPUs, on the other hand, hide memory access latency by switching threads when stalled.
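
A minimal sketch of that pattern, assuming hypothetical dma_start()/dma_wait() calls standing in for a vendor DMA API: a ping-pong (double-buffer) scheme keeps the compute kernel working only on data already resident in TCM, so the VLIW scheduler sees short, predictable load latencies while the DMA engine fetches the next block in the background.

    #define BLK 1024

    static float tcm_buf[2][BLK];      /* ping-pong buffers placed in TCM */

    /* Hypothetical stand-ins for a real DSP vendor DMA API */
    extern void dma_start(float *dst, const float *src, int n);
    extern void dma_wait(void);
    extern void process_block(float *blk, int n);   /* your compute kernel */

    void run(const float *dram_in, int nblocks) {
        int cur = 0;
        dma_start(tcm_buf[cur], dram_in, BLK);       /* prefetch block 0 */
        for (int b = 0; b < nblocks; b++) {
            dma_wait();                              /* block b is now in TCM */
            if (b + 1 < nblocks)                     /* prefetch block b+1 ... */
                dma_start(tcm_buf[cur ^ 1], dram_in + (b + 1) * BLK, BLK);
            process_block(tcm_buf[cur], BLK);        /* ... while computing on b */
            cur ^= 1;                                /* swap ping-pong buffers */
        }
    }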