I read Marc Andreessen’s article Why AI Will Save the World, and this passage jumped out at me. Here Andreessen is talking about people who truly believe that AI will destroy humanity or the fabric of society:
‘My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!”’
Whether you agree or disagree with Andreessen’s view on AI is beside the point. The question here is: how do we evaluate for truth?
Often I’ve found myself frustrated on an issue, whether I’m arguing about it, reading about it, or hearing two people debate it. I get frustrated because a question lingers in my mind: “How can I test whether this is true?”
I’m of the Karl Popper school on this: can you falsify your hypothesis? Does your hypothesis give us a way to test it and definitively prove it false? If it doesn’t, you risk falling for the inductive fallacy.
This is just a fancy way of saying: you can’t smell whether something is bullshit if there’s no way to test whether it’s false.
A high bar, but isn’t the truth that damn worthy?