First there was the Turing Test, then the Lovelace Test. Now there’s Lovelace 2.0, an intelligence test for the next generation. While Turing’s methodology was all about the fake-out, this test requires a computer program to gaze into its own navel and come up with something unexpected.
The first Lovelace Test, named for computing pioneer Ada Lovelace and proposed in 2001, held that instead of tricking people into believing a computer was human, a better measure of intelligence would be whether the computer could create something artistically unique, something beyond the output expected by the test’s creators.
“It’s important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human,” said Mark Riedl, an associate professor at Georgia Tech and the scientist behind Lovelace 2.0. “And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities.”
While the original Lovelace Test merely required the creation to be a surprise, Lovelace 2.0 sets up parameters based on the originality of the computer’s creation, not necessarily its quality. In essence, if the computer creates an original song, poem or other piece of art, the only thing that counts is its uniqueness, not whether humans like it or understand it. Someone break out the digital berets, because it sounds like computers may have a future in artistic expression.
Photo: Andrew Becraft