I think you're absolutely right that that's the easiest approach. I hope you don't mind me asking about something a bit more involved, though.
Wouldn't fine-tuning produce better results, as long as you avoid catastrophic forgetting? You'd preserve more context-window space too, right? Especially if you wanted it to memorize years of facts?
I spend all of my time with image and video models, and my knowledge is very thin when it comes to running, fine-tuning, etc. with language models.
How would one start training an LLM on the entire corpus of one's writings? What model would you use? What scripts and tools?
Has anyone had good results with this?
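To make the question concrete, here's the kind of data-prep step I imagine you'd need before any fine-tune: splitting the corpus into fixed-size overlapping chunks for causal-LM training. This is just my naive sketch (the function name and parameters are my own placeholders, not any library's API), so please correct me if the real workflow looks different:

```python
def chunk_corpus(text: str, chunk_words: int = 512, overlap: int = 64) -> list[str]:
    """Split a writing corpus into overlapping word-level chunks.

    Hypothetical sketch: real pipelines would chunk by tokenizer
    tokens, not whitespace words, but the idea is the same.
    """
    words = text.split()
    step = chunk_words - overlap  # advance less than a full chunk so chunks overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + chunk_words]
        if piece:
            chunks.append(" ".join(piece))
        if start + chunk_words >= len(words):
            break  # the final chunk already covers the tail of the corpus
    return chunks

# Stand-in for years of writing: 1200 distinct "words".
corpus = " ".join(f"w{i}" for i in range(1200))
samples = chunk_corpus(corpus)
print(len(samples))  # → 3
```

Each chunk would then become one training example, with the overlap preserving continuity across chunk boundaries.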
Do you need to subsequently add system prompts, or does it just write like you out of the box?
How could you make it answer your phone, for instance? Or Discord messages? Would that sound natural, or is that too far out of domain?