Is Logicracy vulnerable to automated client “bots” posing as humans?
Many interactive Internet sites suffer abuse from "bots": client-side computer programs masquerading as humans. Sites often employ tests intended to discriminate bots from humans; such a test is called a captcha (Completely Automated Public Turing test to tell Computers and Humans Apart). A good captcha is designed to maximize the ratio of a human's performance to a machine's, and its tests must be given frequently and require prompt answers, both so that a reliable ratio can be measured and so that bots cannot conveniently relay hard tests to groups of real humans compensated for their help. Since Logicracy happens to do exactly that, requiring prompt, frequent answers to questions that demand thought, Logicracy is itself a natural captcha of the most reliable kind. Nevertheless, for added security, users must also answer a simple (one mouse click) image captcha for each question asked: a distorted image of text that the user must resolve.
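The idea that prompt, frequent answers act as a natural captcha can be sketched as a latency check: a client answering faster than a human plausibly could, or with machine-like uniform timing, is suspect. This is only an illustrative sketch; the thresholds and the function name are hypothetical, not part of Logicracy's actual implementation.

```python
import statistics

# Illustrative thresholds (assumptions, not measured values):
MIN_HUMAN_SECONDS = 2.0   # assumed floor for reading a question and answering
MIN_VARIABILITY = 0.05    # assumed floor on stdev/mean of human latencies

def looks_like_bot(latencies):
    """Return True if a sequence of answer latencies (seconds) is suspicious."""
    if len(latencies) < 3:
        return False  # too little evidence to judge
    mean = statistics.mean(latencies)
    if mean < MIN_HUMAN_SECONDS:
        return True   # answering faster than a human plausibly could
    cv = statistics.stdev(latencies) / mean
    return cv < MIN_VARIABILITY  # near-constant timing is machine-like

print(looks_like_bot([0.4, 0.5, 0.3]))        # → True  (too fast)
print(looks_like_bot([8.1, 12.7, 6.3, 9.9]))  # → False (varied, human-like)
```

A real deployment would combine such timing statistics with answer quality, since the ratio of human to machine performance, not timing alone, is what the paragraph above proposes to measure.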