`Scale the model, scale the data, scale the GPU farms' is the reigning
sentiment in the world of generative AI today. While model scaling has been
extensively studied, data scaling and its downstream impacts remain
underexplored. This gap is especially critical in the context of
visio-linguistic datasets, whose main source is the World Wide Web, condensed
and packaged as the CommonCrawl dump. This large-scale data dump, which is
known to have numerous drawbacks, is repeatedly mined and serves as the
data-motherlode for large generative models. In this paper, we: 1) investigate
the effect of scaling datasets on hateful content through a comparative audit
of the LAION-400M and LAION-2B-en datasets, containing 400 million and 2
billion samples respectively, and 2) evaluate the downstream impact of scale
by measuring the racial bias of visio-linguistic models trained on these
dataset variants, using the Chicago Face Dataset (CFD) as a
probe. Our results show that 1) the presence of hateful content in the
datasets, measured with a Hate Content Rate (HCR) metric computed over the
inferences of the Pysentimiento hate-detection Natural Language Processing
(NLP) model, increased by nearly $12\%$, and 2) societal biases and negative
stereotypes were exacerbated with scale in the models we evaluated. As scale
increased, the
tendency of the model to associate images of human faces with the `human being'
class over 7 other offensive classes was halved. Furthermore, the tendency
of the model to associate faces in the Black female category with the
`criminal' class doubled, while it quintupled for Black male faces. We
present a qualitative and historical analysis of the model audit results,
reflect on our findings and their implications for dataset curation practice,
and close with a summary and potential directions for future work in this
area.