Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions
AI Safety Fundamentals: Alignment - A podcast by BlueDot Impact
Using hard multiple-choice reading comprehension questions as a testbed, we assess whether presenting human judges with arguments for two competing answer options, one correct and one incorrect, helps them answer more accurately, even when one of the arguments is unreliable and deceptive. If it does, we may be able to increase our justified trust in language-model-based systems by asking them to produce such arguments where needed. Previous research has shown...