
    Turing Test in Artificial Intelligence


    The Turing Test is a simple method of inquiry in artificial intelligence (AI) for deciding whether a machine can demonstrate human intelligence. It was formulated in 1950 by Alan Turing, an English computer scientist, mathematician, cryptanalyst, and theoretical biologist, and is named after him. Turing proposed that a computer can be said to possess artificial intelligence if it can imitate human responses under particular conditions.

    [Image: Turing Test diagram, from https://en.m.wikipedia.org/wiki/Turing_test]

    During the test, one human takes the role of questioner, while the second human and the computer act as respondents. The questioner poses questions to both respondents within a particular subject area, using a specified format and context. The test is repeated many times. If the questioner correctly identifies the machine in half of the runs or fewer, the computer is considered to have demonstrated artificial intelligence, because the questioner regards it as “just as human” as the human respondent.
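    The pass/fail rule described above can be sketched as a small simulation. The trial count and the questioner's per-run detection probability below are hypothetical parameters for illustration, not part of any standard protocol.

    ```python
    import random

    def run_turing_trials(num_trials=100, detect_prob=0.5, seed=42):
        """Simulate repeated Turing Test runs.

        detect_prob is a hypothetical chance that the questioner
        correctly identifies the machine in a single run.
        """
        rng = random.Random(seed)
        correct = sum(rng.random() < detect_prob for _ in range(num_trials))
        # The machine "passes" if the questioner is right no more often
        # than chance, i.e. in half of the test runs or fewer.
        return correct, correct <= num_trials // 2

    correct, passed = run_turing_trials()
    print(f"questioner correct in {correct}/100 runs; machine passes: {passed}")
    ```

    A machine that is indistinguishable from the human drives the questioner toward chance-level (50%) accuracy, which is exactly the threshold the rule encodes.
    
    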

    History of Turing Test

    The test is named after Alan Turing, who introduced it in his 1950 paper “Computing Machinery and Intelligence,” written while he was at the University of Manchester. In the paper, Turing suggested a twist on what is called “The Imitation Game.” The Imitation Game involves no AI at all, but rather three human participants in three separate rooms, each connected by a screen and keyboard: one room holds a man, another a woman, and the third a judge. The man attempts to persuade the judge that he is the woman, and the judge attempts to determine which is which.

    Turing altered the game to comprise an AI, a human, and a human questioner, whose job accordingly becomes determining which respondent is the AI and which is the human. Since the test was devised, several programs have been claimed to pass it; one of the earliest was ELIZA, a program developed by Joseph Weizenbaum.

    Limitations of the Turing Test

    The Turing Test has been criticized over the years, in particular because historically the questioning had to be restricted for a computer to appear human-like. When questions were open-ended and required conversational answers, the program was far less likely to fool the questioner.

    Moreover, a program such as ELIZA could pass the Turing Test by manipulating symbols it does not understand at all. Many argue that this does not demonstrate intelligence comparable to a human's.
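    The kind of symbol manipulation ELIZA relied on can be illustrated with a toy responder. The patterns and reflections below are illustrative stand-ins, not Weizenbaum's original DOCTOR script: the program only rearranges the user's own words, with no understanding involved.

    ```python
    import re

    # Word swaps that turn the user's phrasing back on them.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    # Scripted pattern -> response-template pairs (illustrative only).
    RULES = [
        (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        """Swap pronouns so the echoed fragment reads naturally."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(line):
        for pattern, template in RULES:
            match = pattern.match(line.strip())
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # fallback when no pattern matches

    print(respond("I am worried about my exam"))
    # prints "Why do you say you are worried about your exam?"
    ```

    The response looks attentive, yet the program has no model of exams or worry; this is exactly the criticism that pattern-matching alone does not amount to intelligence.
    
    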

    Variations to the Turing Test

    Several variations of the Turing Test have been proposed to make it more relevant:

    • Minimum Intelligent Signal Test – Only yes/no or true/false questions are allowed.
    • Total Turing Test – The questioner also tests the subject's perceptual abilities and its ability to manipulate objects.
    • Reverse Turing Test – A human attempts to prove to a computer that they are not a robot; CAPTCHA verification is an example.
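    The reverse Turing Test idea behind CAPTCHA can be sketched in a few lines. Real CAPTCHAs render visually distorted images; this text-only toy uses spaced-out letters as a stand-in for distortion, and the function names here are invented for the example.

    ```python
    import random
    import string

    def make_challenge(length=6, seed=None):
        """Issue a challenge string and the expected answer."""
        rng = random.Random(seed)
        answer = "".join(rng.choice(string.ascii_uppercase) for _ in range(length))
        challenge = " ".join(answer)  # stand-in for visual distortion
        return challenge, answer

    def verify(answer, reply):
        """A human-supplied reply passes if it matches, ignoring case/whitespace."""
        return reply.strip().upper() == answer

    challenge, answer = make_challenge(seed=1)
    print("Type these letters without spaces:", challenge)
    ```

    The machine being tested is the server, and the party under suspicion is the client: the roles of the classic test are reversed, since the computer judges whether it is talking to a human.
    
    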

    Alternatives to the Turing Test

    Because many consider the Turing Test flawed, several alternatives have since been proposed:

    • Winograd Schema Challenge – A test that poses multiple-choice questions in a specific format, hinging on commonsense pronoun disambiguation.
    • The Marcus Test – A program that can “watch” a television show is tested by answering meaningful questions about the show's content.
    • The Lovelace Test 2.0 – A test that examines an AI by assessing its ability to create art.
