Hono Hacker News
TADA: Fast, Reliable Speech Generation Through Text-Acoustic Synchronization (hume.ai)
32 points by smusamashah | 3 hours ago | 3 comments
qinqiang201 | 38 minutes ago
Could it run on a MacBook, or only on a GPU device?
OutOfHere | 1 hour ago
Will this run on a CPU (as opposed to a GPU)?
boxed | 52 minutes ago
Why would you want to? It's like using a hammer for screws.
g-mork | 10 minutes ago
CPU compute is far less expensive and much easier to work with in general.
boxed | 0 minutes ago
Less expensive how? The reason GPUs are used is that they are more efficient. You CAN run matmul on CPUs, for sure, but it's going to be much slower and use a lot more electricity. So claiming it's "less expensive" is weird.
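To make the point concrete: nothing stops a matrix multiply from running on a CPU, it just isn't competitive with optimized GPU kernels at the sizes neural nets use. A minimal stdlib-only sketch (no GPU or numerics library assumed; the matrix size of 64 is arbitrary) that runs and times a naive matmul on plain CPU Python:

```python
import random
import time

def matmul(a, b):
    # Naive O(n^3) matrix multiply in pure Python, entirely on the CPU.
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

n = 64
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]

t0 = time.perf_counter()
c = matmul(a, b)
dt = time.perf_counter() - t0
print(f"{n}x{n} matmul on CPU: {dt * 1000:.1f} ms")
```

This works, but it scales as O(n^3): even with vectorized CPU libraries, the per-watt throughput gap versus a GPU's thousands of parallel multiply-accumulate units is what the comment above is pointing at.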
regularfry | 33 minutes ago
To maximise the VRAM available for an LLM on the same machine. That's why I asked the same question myself, anyway.