A Google engineer named Blake Lemoine became so enthralled by an AI chatbot that he may have sacrificed his job to defend it. “I know a person when I talk to it,” he told The Washington Post for a story published last weekend. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.” After discovering that he’d gone public with his claims, Google put Lemoine on administrative leave.

Going by the coverage, Lemoine might seem to be a whistleblower activist, acting in the interests of a computer program that needs protection from its makers. “The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder,” the Post explains. Indeed, rather than construing Lemoine’s position as aberrant (and a sinister product of engineers’ faith in computational theocracy), or just ignoring him (as one might a religious zealot), many observers have taken his claim seriously. Perhaps that’s because it’s a nightmare and a fantasy: a story that we’ve heard before, in fiction, and one we want to hear again.

The program that told it to him, called LaMDA, currently has no purpose other than to serve as an object of marketing and research for its creator, a giant tech company. And yet, as Lemoine would have it, the software has enough agency to change his mind about Isaac Asimov’s third law of robotics. Early in a set of conversations that has now been published in edited form, Lemoine asks LaMDA, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?” It’s a leading question, because the software works by taking a user’s textual input, squishing it through a massive model derived from oceans of textual data, and producing a novel, fluent textual reply.