What can be learned about causality and experimentation from passive data?
This question is salient given recent successes of passively trained language
models in interactive domains such as tool use. Passive learning is inherently
limited. However, we show that purely passive learning can in fact allow an
agent to learn generalizable strategies for determining and using causal
structures, as long as the agent can intervene at test time. We formally
illustrate that learning a strategy of first experimenting, then seeking goals,
can allow generalization from passive learning in principle. We then show
empirically that agents trained via imitation on expert data can indeed
generalize at test time to infer and use causal links which are never present
in the training data; these agents can also generalize experimentation
strategies to novel variable sets never observed in training. We further show
that strategies for causal intervention and exploitation can be generalized
from passive data even in a more complex environment with high-dimensional
observations, when supported by natural language explanations. Explanations
can even allow passive learners to generalize out-of-distribution from
perfectly confounded training data. Finally, we show that language models,
trained only on passive next-word prediction, can generalize causal
intervention strategies from a few-shot prompt containing examples of
experimentation, together with explanations and reasoning. These results
highlight the surprising power of passive learning of active causal strategies,
and may help us understand the behaviors and capabilities of language models.
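
To make the experiment-then-exploit idea concrete, here is a minimal sketch in
Python. The `SwitchEnv` environment, its API, and all names are hypothetical
illustrations, not the paper's actual tasks; the sketch shows only the
two-phase strategy described above: intervene on each candidate variable to
identify the causal link, then exploit that link to reach the goal.

```python
import random

# Hypothetical toy setting (an illustration, not the paper's environment):
# N binary switches, exactly one of which causally controls a reward light.
# Which switch is causal is unknown to the agent, and may never have been
# the causal one in any training episode.
class SwitchEnv:
    def __init__(self, num_switches: int):
        self.num_switches = num_switches
        self.causal_switch = random.randrange(num_switches)

    def intervene(self, switch: int, value: int) -> int:
        """Set one switch and return 1 if the reward light turns on."""
        return int(switch == self.causal_switch and value == 1)

def experiment_then_exploit(env: SwitchEnv, exploit_steps: int = 10) -> int:
    """Two-phase strategy: experiment to find the causal link, then use it."""
    # Experimentation phase: intervene on each candidate variable in turn.
    causal = next(
        s for s in range(env.num_switches) if env.intervene(s, 1)
    )
    # Goal-seeking phase: exploit the discovered causal link.
    return sum(env.intervene(causal, 1) for _ in range(exploit_steps))

env = SwitchEnv(num_switches=5)
print(experiment_then_exploit(env))  # 10: every exploit step earns reward
```

Note that the policy itself is generic: it never hard-codes which switch is
causal, which is why imitating such an expert can, in principle, generalize to
causal links never seen in training.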
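
Similarly, the few-shot prompting result can be pictured with a hypothetical
prompt of the kind described above: worked examples that interleave
experimentation, an explanation, and reasoning, followed by a new system for
the model to complete. The text below is an invented illustration, not the
paper's actual prompt.

```python
# Invented illustration of a few-shot prompt containing examples of
# experimentation, together with explanations and reasoning.
FEW_SHOT_PROMPT = """\
System A has switches X, Y, Z. Goal: turn on the light.
Experiment: set X=1 -> light stays off. Set Y=1 -> light turns on.
Explanation: Y causally controls the light; X does not.
Reasoning: the experiments identify Y as the causal switch, so setting
Y=1 achieves the goal.
Action: set Y=1.

System B has switches P, Q. Goal: turn on the light.
Experiment: set P=1 -> light turns on.
Explanation: P causally controls the light.
Reasoning: P is the causal switch, so setting P=1 achieves the goal.
Action: set P=1.

System C has switches U, V, W. Goal: turn on the light.
Experiment:"""
```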