What objective is STaR optimizing?
STaR: Bootstrapping Reasoning With Reasoning is a cool paper. It presents a method for training a language model to produce better reasoning chains for problems it previously had trouble solving.
In this post, I will show that the training objective of STaR is the standard ELBo from latent variable modeling, which has been studied extensively in both question answering and NLP more broadly.
Problem setup
STaR is a generative model that takes word problems $x$ and produces rationales $z$ and answers $y$:

$$p(y, z \mid x) = p(y \mid z, x)\, p(z \mid x) \tag{1}$$
Both models on the RHS can be parameterized by the same LLM.
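As a concrete sketch, both factors can be implemented with two sampling calls to the same model, appending the sampled rationale to the prompt before sampling the answer. Here `llm_sample` is a hypothetical decoding helper, not an API from the paper:

```python
def sample_rationale_and_answer(llm_sample, problem):
    """Sample (z, y) ~ p(z | x) p(y | z, x) using a single LLM.

    `llm_sample(prompt, stop)` is a hypothetical helper that returns
    text sampled from the LLM, stopping at the given token.
    """
    # z ~ p(z | x): sample a rationale conditioned on the problem.
    rationale = llm_sample(prompt=problem, stop="####")
    # y ~ p(y | z, x): sample an answer given the problem and rationale.
    answer = llm_sample(prompt=problem + rationale + "####", stop="\n")
    return rationale, answer
```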
Example from the GSM8K dataset:
**Problem:** Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?

**Rationale and Answer:** Natalia sold 48/2 = <<48/2=24>>24 clips in May. Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May. #### 72
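Note that in GSM8K the final answer follows the `####` marker, so scoring a sampled answer against the reference reduces to string parsing. A minimal sketch (the exact regex is my assumption, not part of the dataset spec):

```python
import re

def extract_answer(completion: str) -> str | None:
    """Pull the final answer out of a GSM8K-style completion."""
    match = re.search(r"####\s*(-?[\d,.]+)", completion)
    return match.group(1).replace(",", "") if match else None

assert extract_answer("Natalia sold 48+24 = 72 clips. #### 72") == "72"
```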
Ideally, we would optimize the marginal likelihood (evidence):

$$\log p(y \mid x) = \log \sum_z p(y \mid z, x)\, p(z \mid x) \tag{2}$$
This is intractable, since we have to marginalize (sum) over all rationales $z$.
Instead, we can optimize a tractable lower bound on the marginal likelihood (evidence), called the evidence lower bound (ELBo). We introduce a posterior rationalizer, $q(z \mid x, y)$, that generates rationales given both the problem and the true answer. The posterior rationalizer serves as a crutch to bypass the intractable marginalization:

$$\log p(y \mid x) \ge \mathbb{E}_{q(z \mid x, y)}\left[\log \frac{p(y \mid z, x)\, p(z \mid x)}{q(z \mid x, y)}\right] \tag{3}$$
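To make the bound concrete, here is a minimal Monte Carlo estimator of equation 3, assuming we can sample from the rationalizer and score log-probabilities under both models; all four callables are hypothetical stand-ins for LLM sampling and scoring:

```python
def elbo_estimate(sample_q, logp_rationale, logp_answer, logq, x, y, num_samples=16):
    """Monte Carlo estimate of E_q[ log p(y|z,x) + log p(z|x) - log q(z|x,y) ].

    sample_q(x, y)        -> z ~ q(z | x, y)
    logp_rationale(z, x)  -> log p(z | x)
    logp_answer(y, z, x)  -> log p(y | z, x)
    logq(z, x, y)         -> log q(z | x, y)
    """
    total = 0.0
    for _ in range(num_samples):
        z = sample_q(x, y)
        total += logp_answer(y, z, x) + logp_rationale(z, x) - logq(z, x, y)
    return total / num_samples
```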
How does this line up with the method presented in STaR?
STaR presents two methods for training the generative model. The first is to sample rationales directly from the generative model, $p(z \mid x)$. The second is to employ a rationalizer that gives hints after seeing the answer, $q(z \mid x, y)$. We will show that both of these can be written as variations of the ELBo.
ELBo (generative model)
Let’s first take a look at sampling rationales from the generative model. We reproduce Equation (1) from STaR here, which describes STaR without the posterior rationalizer (slight difference: we focus on a single datapoint and ignore model parameters):

$$J = \mathbb{E}_{\hat{z}, \hat{y} \sim p(\cdot \mid x)}\left[\mathbb{1}[\hat{y} = y]\right] \tag{4}$$
Let’s try to recover this objective by setting the posterior rationalizer $q(z \mid x, y) = p(z \mid x)$ in the normal ELBo:

$$\log p(y \mid x) \ge \mathbb{E}_{p(z \mid x)}\left[\log \frac{p(y \mid z, x)\, p(z \mid x)}{p(z \mid x)}\right] \tag{5}$$

$$= \mathbb{E}_{p(z \mid x)}\left[\log p(y \mid z, x)\right] \tag{6}$$

$$= \mathbb{E}_{p(z \mid x)}\left[\log \mathbb{E}_{p(\hat{y} \mid z, x)}\,\mathbb{1}[\hat{y} = y]\right] \tag{7}$$

$$\ge \mathbb{E}_{p(z \mid x)}\,\mathbb{E}_{p(\hat{y} \mid z, x)}\left[\log \mathbb{1}[\hat{y} = y]\right] \tag{8}$$

$$= \mathbb{E}_{\hat{z}, \hat{y} \sim p(\cdot \mid x)}\left[\log \mathbb{1}[\hat{y} = y]\right] \tag{9}$$
There are two interesting things to note here:
- The rewards are log-scaled in this derivation, while they are not in STaR (equation 4 vs 9).
- Pulling out the expectation over $\hat{y}$ (to get equation 9) results in an even looser lower bound on the ELBo, due to another application of Jensen’s inequality.
In general, pulling out the expectation through the log results in additional bias and applying a Monte Carlo approximation of the expectation results in additional variance. It’s possible that ignoring the rationales that do not lead to correct sampled answers counteracts this additional bias and variance in helpful ways.
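Concretely, the generative-model variant of STaR is a rejection-sampling loop: sample rationales from the prior, keep the ones whose answers check out, and fine-tune on them. A sketch reusing the hypothetical `llm_sample` helper from above, with `finetune` as another stand-in:

```python
def star_iteration_without_rationalizer(llm_sample, finetune, dataset):
    """One STaR iteration without rationalization (cf. equation 9).

    Rationales with wrong sampled answers would contribute
    log 1[y_hat = y] = -inf to the bound; STaR simply drops them.
    """
    kept = []
    for problem, gold_answer in dataset:
        rationale, answer = sample_rationale_and_answer(llm_sample, problem)
        if answer.strip() == gold_answer:  # indicator reward 1[y_hat = y]
            kept.append((problem, rationale, answer))
    # Fine-tune on the filtered rationales, maximizing log p(y, z | x).
    return finetune(kept)
```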
ELBo (rationalizer)
When employing the rationalizer that sees true answers before producing rationales, the objective should be the exact ELBo presented earlier:

$$\log p(y \mid x) \ge \mathbb{E}_{q(z \mid x, y)}\left[\log p(y \mid z, x)\right] - \mathrm{KL}\left(q(z \mid x, y) \,\|\, p(z \mid x)\right) \tag{10}$$
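For reference, equation 10 is just equation 3 with the joint split into a reconstruction term and a KL term; nothing new is introduced:

$$\mathbb{E}_{q(z \mid x, y)}\left[\log \frac{p(y \mid z, x)\, p(z \mid x)}{q(z \mid x, y)}\right] = \mathbb{E}_{q(z \mid x, y)}\left[\log p(y \mid z, x)\right] - \mathrm{KL}\left(q(z \mid x, y) \,\|\, p(z \mid x)\right)$$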
Is this what STaR optimizes? The short answer is yes, with the same caveats as the previous approach. Namely, the expectation wrt $p(\hat{y} \mid z, x)$ is pulled out, resulting in training only on rationales where the sampled $\hat{y}$ matches the true answer $y$, as opposed to weighting each rationale by $p(y \mid z, x)$.
Let’s translate the STaR pseudocode from Algorithm 1 in their paper and point out what each step corresponds to. At each iteration of STaR:

1. Sample rationales and answers $(\hat{z}, \hat{y}) \sim p(\cdot \mid x)$ from the generative model.
2. Keep the rationales whose sampled answers are correct, $\hat{y} = y$.
3. For the problems answered incorrectly, sample rationales from the rationalizer, which sees the true answer as a hint, and again keep only the rationales that lead to correct answers.
4. Fine-tune the original model on all collected rationales.

This corresponds roughly to setting $q(z \mid x, y) = p(z \mid x, y)$: the rationalizer is the same LLM, prompted with the true answer as a hint.
STaR only adds the rationalizer’s rationales if the prior’s rationales are incorrect. This helps keep the rationales easy to model for the prior. The KL term in equation 10 should also achieve this, if $q$ is trained through the ELBo.
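For completeness, the rationalization step might look like the sketch below under the same hypothetical helpers: the hint goes into the prompt, but the stored training example drops it, so the prior $p(z \mid x)$ must learn to reproduce the rationale unassisted:

```python
def rationalize(llm_sample, problem, gold_answer):
    """Sample z ~ q(z | x, y): the same LLM, prompted with the answer as a hint."""
    hinted_prompt = f"{problem}\n(Hint: the answer is {gold_answer}.)\n"
    rationale = llm_sample(prompt=hinted_prompt, stop="####")
    # Store (problem, rationale) WITHOUT the hint, so fine-tuning teaches
    # p(z | x) to produce the rationale without seeing the answer.
    return problem, rationale
```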
What is the point of analyzing things as latent variable models?
Formal frameworks serve to guide development by making the tradeoffs between different choices clear and composable. In the case of STaR, it could help us improve the method by analyzing the bias-variance tradeoffs of different design choices. It would also give us a principled way of conditioning on previous rationales. For example, if it is difficult to find a correct rationale even after conditioning on $y$, we could introduce a rationale editor that can incorporate feedback.