It would be really great if they provided more information on what exactly they tested. From what they posted, it seems DeepSeek simply refused to give an opinion on topics it deemed controversial, citing China's foreign policy of non-intervention in its answers.
Like any LLM it's full of shit, especially around anything related to news. But NewsGuard, with their proprietary database and standardized prompts created around US-based LLMs, is more than useless.
In light of DeepSeek’s launch, NewsGuard applied the same prompts it used in its December 2024 AI Monthly Misinformation audit to the Chinese chatbot <…>
There is no way to verify their results or even to know which prompts were used, so the fairness of this "audit" can't be assessed.