Andrew Ng (Baidu; Stanford CS faculty; founded Coursera and Google Brain) would say, according to AI:
I believe that AI systems with significant capabilities should have interpretable decision-making processes. It is important for trust and accountability to understand how these systems arrive at their conclusions, especially if they are influencing critical aspects of our lives. Interpretability allows us to ensure that AI systems align with human values and can be corrected if they go astray.