In complex and stochastic environments, the ability to cope with the unexpected is essential for survival. This paper describes a motivational framework founded on the need to reduce uncertainty. It centres on a merging of classifier systems, taken from the field of artificial intelligence, with the information-primacy approach to animal motivation. It is proposed that, in order to deal with uncertainty, the animal constructs cognitive models of its environment that are composed of hierarchies of condition-action rules. Several rules are active in parallel at any given time, and these rules compete to determine behaviour. Rules that prove to be the best predictors (and that may, in addition, have resulted in reinforcement) gain strength, whilst the less successful rules lose strength over time. Unexpected events trigger the generation of families of new rules, which are then subject to environmental selection. The efficient operation of the cognitive model requires the continual reduction of uncertainty, so that information-gathering behaviour forms a substratum upon which other, more obviously goal-directed, behaviours occur. High need states can break into this ongoing behaviour and give it a special direction. The framework is related to the inherent variability of behaviour, the failure of certain reinforcement contingencies to control behaviour, and approach/avoidance behaviour towards novel stimuli.
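The rule dynamics sketched above — parallel matching, strength-based competition, strengthening of good predictors, decay of the rest, and the generation of candidate rules when an unexpected event occurs — follow the general shape of a Holland-style classifier system. The following is a minimal Python sketch under that assumption; all class and method names (`Rule`, `ClassifierModel`, `cover`, etc.) are illustrative, not taken from the paper:

```python
import random

class Rule:
    """A condition-action rule with a strength reflecting its predictive worth."""
    def __init__(self, condition, action, strength=1.0):
        self.condition = condition   # predicate over the observed state
        self.action = action
        self.strength = strength

class ClassifierModel:
    """Toy cognitive model: matching rules are activated in parallel and
    compete by strength; successful predictors are strengthened, failures
    weakened; an uncovered (unexpected) state spawns a family of new rules."""
    def __init__(self, rules, learning_rate=0.2, decay=0.05):
        self.rules = rules
        self.lr = learning_rate
        self.decay = decay

    def act(self, state):
        """Return the strongest rule whose condition matches, or None if
        the state is unexpected (no rule covers it)."""
        matching = [r for r in self.rules if r.condition(state)]
        if not matching:
            return None
        return max(matching, key=lambda r: r.strength)

    def update(self, rule, predicted_correctly):
        """Strengthen a rule that predicted well; weaken one that did not."""
        if predicted_correctly:
            rule.strength += self.lr
        else:
            rule.strength = max(0.0, rule.strength - self.lr)

    def tick(self):
        """All rules lose a little strength over time unless reconfirmed."""
        for r in self.rules:
            r.strength = max(0.0, r.strength - self.decay)

    def cover(self, state, actions, n=3):
        """Unexpected event: generate a family of new candidate rules whose
        conditions match the surprising state; environmental selection
        (via update/tick) then sorts out which survive."""
        new = [Rule(lambda s, st=state: s == st,
                    random.choice(actions),
                    strength=0.5)
               for _ in range(n)]
        self.rules.extend(new)
        return new
```

In this sketch, information gathering corresponds to acting so as to keep the rule population well calibrated: states that no rule covers trigger `cover`, and repeated confirmation or disconfirmation reshapes the strengths that decide future competitions.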