For nearly two decades, CAPTCHAs have been widely used as a means of
protection against bots. Throughout the years, as their use grew, techniques to
defeat or bypass CAPTCHAs have continued to improve. Meanwhile, CAPTCHAs have
also evolved in terms of sophistication and diversity, becoming increasingly
difficult to solve for both bots (machines) and humans. Given this
long-standing and still-ongoing arms race, it is critical to investigate how
long it takes legitimate users to solve modern CAPTCHAs, and how they are
perceived by those users.
In this work, we explore CAPTCHAs in the wild by evaluating users' solving
performance and perceptions of unmodified currently-deployed CAPTCHAs. We
obtain this data through manual inspection of popular websites and user studies
in which 1,400 participants collectively solved 14,000 CAPTCHAs. Results show
significant differences between the most popular types of CAPTCHAs:
surprisingly, solving time and user perception are not always correlated. We
also perform a comparative study to investigate the effect of experimental
context -- specifically, the difference between solving CAPTCHAs directly versus solving
them as part of a more natural task, such as account creation. Whilst there
were several potential confounding factors, our results show that experimental
context could affect CAPTCHA solving, and must be taken into account in
future CAPTCHA studies. Finally, we investigate CAPTCHA-induced user task
abandonment by analyzing participants who start but do not complete the task.