Hacker News
thot_experiment | 10 months ago | on: Llasa: Llama-Based Speech Synthesis
No; you just condition the model with text–voice token pairs, and when you then run further inference with new text, the generated voice tokens tend to match the pairs earlier in the context.
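A minimal sketch of what that prompt construction could look like, assuming a decoder-only LM whose vocabulary mixes text tokens and discrete speech ("voice") tokens. All token IDs, separators, and function names here are hypothetical illustrations of the idea, not the actual Llasa interface:

```python
# Hypothetical special tokens marking text vs. speech segments.
BOS, TEXT_SEP, SPEECH_SEP = 0, 1, 2

def build_prompt(pairs, new_text_tokens):
    """Concatenate (text, voice) token pairs as conditioning context,
    then append the new text whose speech tokens the model should
    generate, ideally in the reference speaker's voice."""
    prompt = [BOS]
    for text_toks, voice_toks in pairs:
        prompt += [TEXT_SEP] + list(text_toks)
        prompt += [SPEECH_SEP] + list(voice_toks)
    # The model continues after the final SPEECH_SEP with voice tokens
    # that tend to match the speaker in the pairs above.
    prompt += [TEXT_SEP] + list(new_text_tokens) + [SPEECH_SEP]
    return prompt

# One reference text–voice pair, then new text to synthesize.
ref_pair = ([10, 11, 12], [900, 901, 902, 903])
prompt = build_prompt([ref_pair], [13, 14])
print(prompt)
```

The model then decodes speech tokens as the continuation of `prompt`; the in-context pair acts as the voice reference, with no fine-tuning involved.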