Paradigmatic gaps pose a problem for computational models of language acquisition: most models that generalize online (eager learners, such as rule-based learners and neural networks) will not notice systematically missing input. This is mainly a problem for the plausibility of the model, since the missing forms and structures will not degrade recognition performance (they occur too rarely to matter). We are looking not only for a descriptive model of paradigmatic gaps, but also for an explanatory model of why they emerge. The relevance for computational linguistics is that we can show how a linguistically motivated feature makes it possible to notice a negative regularity (i.e. that forms are missing), and this suggests that a hypothesis-driven approach may be combined with statistical techniques (e.g. a memory-based learner) in interesting ways.
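The notion of noticing a negative regularity can be illustrated with a minimal sketch (hypothetical data and function names, not the paper's model): a learner that explicitly tracks which paradigm cells are attested for other lexemes can flag a cell whose absence is systematic, whereas an eager generalizer would simply fill it in.

```python
from collections import defaultdict

# Toy corpus of observed (lexeme, paradigm_cell) pairs -- hypothetical
# illustration only; "pobedit'" stands in for the classic Russian 1sg gap.
observed = [
    ("begat'", "inf"), ("begat'", "3sg"), ("begat'", "1sg"),
    ("pobedit'", "inf"), ("pobedit'", "3sg"),   # 1sg never attested
    ("govorit'", "inf"), ("govorit'", "3sg"), ("govorit'", "1sg"),
]

def find_gaps(pairs):
    """Flag lexeme/cell combinations that are expected but unattested.

    A cell counts as 'expected' for a lexeme when other lexemes attest
    it; its absence is the negative regularity a gap-aware learner
    could notice, while an eager learner would just generalize over it."""
    cells = {cell for _, cell in pairs}
    by_lexeme = defaultdict(set)
    for lex, cell in pairs:
        by_lexeme[lex].add(cell)
    return {(lex, cell)
            for lex in by_lexeme
            for cell in cells - by_lexeme[lex]}

print(find_gaps(observed))  # flags ("pobedit'", "1sg") as a candidate gap
```

In a realistic setting the absence of a form would of course have to be weighed statistically against lexeme frequency, since rare lexemes have accidentally missing cells; the sketch only shows the shape of the hypothesis-driven check.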