Bet 939

Duration 5 years (02023-02028)

“Within the next five years, no large language model (LLM) will be used by a Western nation law enforcement actor to make a sole determination on whether or not someone should be arrested.”

PREDICTOR
Mike Parks

CHALLENGER
Unchallenged

Parks's Argument

While LLMs have shown some ability to do things like pass the legal exams necessary to practice law, they are infamously deficient when it comes to reasoning ability. Every model available today comes with warnings about its tendency to "hallucinate", or make up plausible-sounding output that is factually incorrect. While efforts like OpenAI's RLHF have targeted the worst abuses of consensus reality and bias, they remain narrowly aimed at individual inaccuracies rather than solving the hallucination problem outright. I predict that this problem is intractable within the framework of an LLM. Such models will never exhibit the level of reasoning sophistication necessary for states to hand over decisions of such gravity to them. While some other artificial intelligence method will likely appear within the next half decade, and may even grow out of current LLM technology, it will be quite different in construction.

Additionally, even in a world where this technology could make such determinations accurately:

- Qualified immunity in countries like the United States and the UK allows police to make arrests upon "reasonable suspicion", which is broad enough in practice that police would have no reason to outsource the decision to arrest to a third party.

- If we look at this from the standpoint of prosecution rather than just the act of arrest, uncomfortable legal questions arise with regard to the right to face one's accuser.

While I can see a legal LLM assisting with research and looking up information, ultimately a human must make the call on whether that information is accurate and whether to act on it. I do not see this changing in the next five years. LLMs offer a lot of risk and reward to humanity. Having them autonomously make law enforcement decisions is part of neither.

CLARIFICATION: This prediction is specifically about an LLM (as defined by its creators) being the sole determinant leading to the arrest of another person. To quote the sentence that inspired this bet: "We asked GPT if we should arrest this guy and it said yes." It is specifically not about LLM output leading to someone's arrest by way of their commission of a crime; rather, it is about the model itself making an affirmative output that a person should be arrested, followed by the authorities executing that arrest directly and solely as a result of the model output. The qualification to "Western" nations should be understood to exclude countries without functioning justice systems; I have much less certainty about the willingness of other countries to "legally experiment" on their populations.
