ICLR: Improving Reasoning in Language Models
In a recent paper, to be presented in Singapore at the International Conference on Learning Representations (ICLR), we assessed whether it is possible to modulate the perceived reasoning ability of an LLM using an approach called representation engineering. A key aspect of a transformer, the machine learning architecture on which (almost) all LLMs are based, is the transformation of high-dimensional text representations. In this framing, a representation is a large vector, a long list of numbers, that represents the text currently being processed by the model. It is this representation that we attempt to manipulate.
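To make this concrete, here is a minimal sketch (not the paper's code) of reading such a representation with the Hugging Face transformers library. The model name, layer index and prompt are illustrative choices.

```python
# Sketch: read the residual-stream representation of a prompt from a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"  # requires `accelerate`
)

prompt = "If all birds can fly and a penguin is a bird, can a penguin fly?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple with one tensor per layer (plus the embedding output),
# each of shape (batch, sequence_length, hidden_size).
layer = 15                                             # an arbitrary middle layer
representation = outputs.hidden_states[layer][0, -1]   # vector for the last token
print(representation.shape)                            # e.g. torch.Size([4096]) for Mistral-7B
```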
Related work had shown that this representation can be successfully modulated to induce specific types of "behavior" in an LLM, for example making the model output more or less positive text. This requires no changes to the model's weights, only to the representation itself, provided you can figure out how the representation should be manipulated; a sketch of the idea follows below.
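The following hedged sketch, continuing from the snippet above, illustrates that idea: derive a steering direction from contrasting prompts (here, positive versus negative sentiment) and add it to the residual stream at inference time via a forward hook. The prompts, layer index and scale are illustrative assumptions, not values from any particular paper.

```python
# Sketch: derive a steering vector from contrastive prompts and inject it at inference.
import torch

def mean_last_token_state(model, tokenizer, prompts, layer):
    """Average residual-stream vector over the last token of each prompt at `layer`."""
    states = []
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so layer i's output is at index i + 1.
        states.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(states).mean(dim=0)

positive = ["I absolutely loved this film.", "What a wonderful day."]
negative = ["I absolutely hated this film.", "What a terrible day."]

layer = 15
steering_vector = (
    mean_last_token_state(model, tokenizer, positive, layer)
    - mean_last_token_state(model, tokenizer, negative, layer)
)

# Add the vector to the output of one decoder layer via a forward hook.
def add_vector(module, inputs, output, vec=steering_vector, scale=4.0):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + scale * vec.to(hidden.dtype)
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handle = model.model.layers[layer].register_forward_hook(add_vector)
# ... generate as usual; the output should now skew more positive ...
handle.remove()
```

Roughly the same recipe underlies the intervention in our paper, except that the contrastive prompts are replaced by activations read while the model processes a reasoning task.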
As it turns out, the same can be done with the perceived reasoning ability of an LLM. We applied the approach to several simple reasoning tasks and improved an LLM's performance on them by manipulating the aforementioned representations in a controlled manner. Given that this had already been done for other types of "behavior", it is not a revolutionary finding. It is, however, important in light of recent discussions about the abilities and intelligence of LLMs. In that debate, reasoning is often emphasized as a key component of intelligence and is often treated as something different from the other types of information processing performed by LLMs. Our results suggest that this is not the case.
The abstract of the paper is reproduced below.

Recent advancements in large language models (LLMs) have resulted in increasingly anthropomorphic language concerning the ability of LLMs to reason. Whether reasoning in LLMs should be understood to be inherently different is, however, widely debated. We propose utilizing a representation engineering approach wherein model activations are read from the residual stream of an LLM when processing a reasoning task. The activations are used to derive a control vector that is applied to the model as an inference-time intervention, modulating the representational space of the model to improve performance on the specified task. The method allows us to improve performance on reasoning benchmarks and to assess how control vectors influence the final logit distribution of a model via metrics such as KL divergence and entropy. We apply control vectors to Mistral-7B-Instruct and a range of Pythia models on an inductive, a deductive, and a mathematical reasoning task. We show that an LLM can, to a certain degree, be controlled to improve its perceived reasoning ability by modulating activations. The intervention is dependent upon the ability to reliably extract the model's typical state when correctly solving a task. Our results suggest that reasoning performance can be modulated in the same manner as other information-processing tasks performed by LLMs, and they demonstrate that we are capable of improving performance on specific tasks via a simple intervention on the residual stream with no additional training.
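As a rough illustration of the kind of measurement mentioned in the abstract, the sketch below (reusing `model`, `tokenizer`, `layer` and the `add_vector` hook from the snippets above) compares the final next-token logit distribution with and without an intervention using KL divergence and entropy. It is a simplified stand-in for the paper's actual evaluation, and the prompt is an illustrative inductive-reasoning example.

```python
# Sketch: compare the final logit distribution with and without the intervention.
import torch
import torch.nn.functional as F

def final_logit_distribution(model, tokenizer, prompt):
    ids = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**ids).logits[0, -1]   # logits for the next token
    return F.log_softmax(logits.float(), dim=-1)

prompt = "2, 4, 8, 16, ... What is the next number in the sequence?"

log_p_base = final_logit_distribution(model, tokenizer, prompt)

handle = model.model.layers[layer].register_forward_hook(add_vector)  # intervention on
log_p_steered = final_logit_distribution(model, tokenizer, prompt)
handle.remove()                                                       # intervention off

# KL(steered || base): how much the intervention shifts the next-token distribution.
kl = F.kl_div(log_p_base, log_p_steered, log_target=True, reduction="sum")

# Entropy of the steered distribution: lower entropy means a more "decided" model.
entropy = -(log_p_steered.exp() * log_p_steered).sum()
print(f"KL divergence: {kl:.4f}  entropy: {entropy:.4f}")
```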