The thoughts of a web 2.0 research fellow on all things in the technological sphere that capture his interest.

Saturday 13 June 2009

Brief Thoughts on Twitter and the Turing Test

Before heading off to the allotment to pull weeds this morning, I downloaded some podcasts from a Berkeley course on Foundations of American Cyber-Culture. One of the things discussed was the Turing test [summary by Saygin]:
The interrogator is connected to one person and one machine via a terminal, and therefore can't see her counterparts. Her task is to find out which of the two candidates is the machine, and which is the human, only by asking them questions. If the machine can "fool" the interrogator, it is intelligent.
Whilst I don't generally give a lot of thought to the Turing test, the idea of creating an automated Twitter account in an attempt to pass the test was immediately appealing:
- Twitter offers a massive, current conversational database to draw on.
- The 140-character limit means people are more likely to forgive answers that are not entirely explicit.
- The API means the programming required to create such a bot (albeit not necessarily a good one) would be relatively simple; a rough skeleton is sketched below.
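
To give a sense of how little is involved, here is a minimal sketch of what the outer loop of such a bot might look like. The `fetch_new_mentions` and `post_reply` helpers are hypothetical stand-ins for whatever API wrapper ends up being used; only the overall shape (poll for questions, compose a reply within 140 characters, post it) comes from the points above.

```python
# Minimal bot skeleton. The two Twitter-facing functions are hypothetical
# placeholders, not real API calls.
import time

def fetch_new_mentions(since_id):
    """Hypothetical: return tweets addressed to the bot newer than since_id."""
    raise NotImplementedError

def post_reply(tweet_id, text):
    """Hypothetical: post text as a reply to the given tweet."""
    raise NotImplementedError

def compose_answer(question):
    """The interesting part: produce a plausible, human-sounding reply."""
    return "Good question!"

def run_bot(poll_seconds=60):
    since_id = None
    while True:
        for tweet in fetch_new_mentions(since_id):
            reply = compose_answer(tweet["text"])[:140]  # respect the 140-character limit
            post_reply(tweet["id"], reply)
            since_id = tweet["id"]
        time.sleep(poll_seconds)
```

All the difficulty, of course, is hidden inside `compose_answer`.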

I was not surprised to find, therefore, that other people have had the same idea. However, how much of the human-created Twitter data could the bot use and still be considered a bot? If the bot merely relayed the questions asked of it to someone else and responded with their answer, it would be considered cheating; but if it simply found the answer of someone else who had already answered a similar question, would that be acceptable?
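That "borrow someone else's answer" approach could be as crude as the toy sketch below: given a corpus of question/answer pairs harvested from public tweets (the pairs shown here are invented for illustration), reply with the answer attached to the most similar question. The word-overlap similarity is just the simplest thing that could work, not a considered choice.

```python
# Toy answer-borrowing: reuse the answer given to the most similar question.
def similarity(a, b):
    """Crude word-overlap (Jaccard) similarity between two questions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def borrowed_answer(question, corpus):
    """Return the answer paired with the corpus question most similar to `question`."""
    best_question, best_answer = max(corpus, key=lambda qa: similarity(question, qa[0]))
    return best_answer

# Invented example corpus of (question, answer) pairs.
corpus = [
    ("what's the weather like in london today", "Grey and drizzling, as usual."),
    ("anyone know a good book on gardening", "Try the RHS encyclopedia."),
]

print(borrowed_answer("how's the weather in london", corpus))
# -> "Grey and drizzling, as usual."
```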

There are just not enough hours in the day to do all the things I want to in this always-on world.
