Instruction tuning is the process of further training large language models (LLMs) on a dataset of (instruction, output) pairs in a supervised fashion. This bridges the gap between the LLM's next-word prediction objective and the user's objective of having the model follow human instructions: it teaches the model to understand instructions and respond to them sensibly, rather than merely completing text. Fine-tuning, in the general sense, is training a pre-trained model on a specific task so that it specializes in a particular area; instruction tuning is simply a form of fine-tuning in which the training data is an instruction dataset. Most recent papers about fine-tuning appear, in fact, to be about instruction tuning.
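To make the supervised setup concrete, here is a minimal sketch of a single instruction-tuning step, assuming the Hugging Face transformers library and GPT-2 as a stand-in base model; the prompt template and the example (instruction, output) pair are illustrative, not drawn from any particular dataset. The key detail is that the prompt tokens are masked out of the loss, so the model is supervised only on producing the output.

```python
# Minimal instruction-tuning step (illustrative sketch, not a full trainer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One (instruction, output) pair, rendered into a single training string.
instruction = "Summarize the following in five words or fewer: instruction tuning."
output = "Supervised training on instruction data."
prompt = f"Instruction: {instruction}\n\nResponse: "
full_text = prompt + output + tokenizer.eos_token

enc = tokenizer(full_text, return_tensors="pt")
labels = enc["input_ids"].clone()

# Mask the prompt so the loss covers only the response tokens:
# the model learns to produce the output, not to predict the instruction.
prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]
labels[:, :prompt_len] = -100  # -100 is ignored by the cross-entropy loss

loss = model(
    input_ids=enc["input_ids"],
    attention_mask=enc["attention_mask"],
    labels=labels,
).loss
loss.backward()  # an optimizer step over many such pairs would follow
```

In practice one would loop this over thousands of pairs with an optimizer and batching, or use a higher-level tool that automates the same recipe; the point of the sketch is that instruction tuning is ordinary supervised next-token training, just on instruction-formatted data.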