Base LLM: has been trained to predict the next word based on text training data. It is typically trained on a large amount of data from the internet and other sources to figure out what the most likely next word is.

Instruction Tuned LLM: has been trained to follow instructions. You start off with a base LLM that has been trained on a huge amount of text data, and you further fine-tune it with inputs and outputs that are instructions and good attempts to follow those instructions, often refined with a technique called RLHF (reinforcement learning from human feedback) to make the system better able to be helpful and to follow instructions. (A minimal code sketch of the "predict the next word" behavior is included after the quotes below.)

If you can't explain it simply, you don't understand it well enough.

Figure out what you love doing and don't suck at, then try to figure out how to make a living doing that! Don't be scared. We're all going to die; it's just a question of when and how.

All problems in computer science can be solved by another level of indirection.
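To make the "predict the next word" idea concrete, here is a minimal sketch, assuming the Hugging Face transformers library with GPT-2 as an illustrative base model (the model name and prompt are my own choices, not from the notes above). It asks a base model for the single most likely next token given a prompt.

```python
# Minimal sketch of base-LLM next-word prediction.
# Assumption: Hugging Face "transformers" + "torch" are installed; GPT-2 stands in
# for "a base LLM" purely as an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a time, there was a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a logit for every vocabulary token at each position.
    logits = model(**inputs).logits

# The logits at the last position are the model's scores for the next token.
next_token_id = torch.argmax(logits[0, -1]).item()
print(prompt + tokenizer.decode(next_token_id))
```

The base model simply continues the text; it has no built-in notion of "following instructions." An instruction-tuned model is the same kind of network further fine-tuned on instruction/response pairs and refined with RLHF, which is why prompting it with instructions works so much better.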