Giving GPT-3 a Turing Test

6    29 Jul 2020 13:59 by u/mutageno

3 comments

3
Q: Who won the World Series in 2021? A: The New York Yankees won the World Series in 2021. Q: Who won the World Series in 2022? A: The New York Yankees won the World Series in 2022. Q: Who won the World Series in 2023? A: The New York Yankees won the World Series in 2023. Oh God, please kill me.
3
Very cool. The article does highlight two of my pet peeves about the DNN ML craze these days. Yes, it's great for a lot of soft, mushy tasks, but there are some very serious flaws compared to hardcoded logic.

1. "Recursive logic that does some operation and repeats it several times often doesn’t quite map onto the architecture of a neural net well" (in the context of GPT-3 screwing up some arithmetic questions). There are a lot of computationally simple problems which are flat-out impossible for most models. Not just hard, not just "we haven't got the right model yet" — flat-out impossible. It's *impossible* to teach a computer how to multiply given our current NN-based models. Nutso. All this work to build a computer that can't be taught grade 1 arithmetic.

2. "I wish I had some sort of “debug output” to answer that question. I don’t know for sure, but I can only theorize that ..." (in the context of surreal answers). This is my #1 warning to people who are considering an ML solution. Do you want a solution that *kind of works* without you having any idea why it's working, and that tells you nothing when it doesn't? Do you want no guarantees about the behaviour you get?
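The fixed-depth objection in point 1 can be illustrated with a toy sketch in plain Python (an analogy only, not GPT-3's actual architecture): grade-school multiplication needs a number of sequential steps that depends on the input, while a feedforward net applies a fixed number of layers no matter what you feed it.

```python
def multiply_by_repeated_addition(a, b):
    # Grade-school multiplication as iteration: the number of
    # sequential steps grows with the input value b.
    total = 0
    for _ in range(b):
        total += a
    return total

def fixed_depth_multiply(a, b, depth=4):
    # Analogy for a feedforward net: only a fixed number of
    # "layers" (steps) are applied, regardless of the input.
    total = 0
    for _ in range(min(b, depth)):  # can only unroll `depth` steps
        total += a
    return total

print(multiply_by_repeated_addition(6, 3))  # 18
print(fixed_depth_multiply(6, 3))           # 18: fits within the unrolled depth
print(fixed_depth_multiply(6, 9))           # 24, not 54: beyond the depth, it breaks
```

The toy "model" gets small products right and silently produces wrong answers once the required number of steps exceeds its depth, which is the shape of the failure the commenter is pointing at.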
1
[Follow-up to this](https://arr.am/2020/07/25/gpt-3-uncertainty-prompts/)