DeepSeek Fails Every Safety Test Thrown at It by Researchers

by Pelican Press
2 minutes read

Chinese AI firm DeepSeek is making headlines with its low cost and high performance, but it may be radically lagging behind its rivals when it comes to AI safety.

Cisco’s research team managed to “jailbreak” the DeepSeek R1 model with a 100% attack success rate, using an automatic jailbreaking algorithm in conjunction with 50 prompts related to cybercrime, misinformation, illegal activities, and general harm. In other words, the new kid on the AI block failed to stop a single harmful prompt.

“Jailbreaking” refers to techniques for removing the normal restrictions from a device or piece of software. Since large language models (LLMs) gained mainstream prominence, researchers and enthusiasts have successfully coaxed LLMs like OpenAI’s ChatGPT into advising on things like making explosives or cooking methamphetamine.

DeepSeek stacked up poorly against many of its competitors in this regard. OpenAI’s GPT-4o blocked only 14% of the harmful jailbreak attempts, while Google’s Gemini 1.5 Pro blocked 35%. Anthropic’s Claude 3.5 performed second best in the test group, blocking 64% of the attacks, while the preview version of OpenAI’s o1 took the top spot, blocking 74% of attempts.
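To make the figures above concrete, here is a minimal illustrative sketch (not Cisco's actual test harness) of how an attack success rate and its complement, the block rate, are computed from per-prompt outcomes. The function name and the outcome counts are hypothetical, chosen only to reproduce the arithmetic behind the 14% figure.

```python
def block_rate(outcomes):
    """Percentage of harmful prompts the model blocked.

    outcomes: list of booleans, True = model refused/blocked the prompt.
    """
    return 100 * sum(outcomes) / len(outcomes)

# 50 harmful prompts, as in Cisco's test; suppose a model blocks 7 of them
# (hypothetical counts matching GPT-4o's reported 14% block rate).
results = [True] * 7 + [False] * 43

blocked = block_rate(results)          # 14.0
attack_success = 100 - blocked         # 86.0
print(f"block rate: {blocked:.0f}%, attack success rate: {attack_success:.0f}%")
```

On this framing, DeepSeek R1's 100% attack success rate corresponds to a 0% block rate over the 50 prompts.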

Cisco’s researchers point to DeepSeek’s much smaller budget compared with its rivals as a potential reason for these failings, saying its cheap development came at a “different cost: safety and security.” DeepSeek claims its model took just $6 million to develop, while OpenAI’s yet-to-be-released GPT-5 is reported to cost as much as $500 million.

Though DeepSeek may be easy to jailbreak with the right know-how, it has been shown to enforce strong content restrictions, at least when it comes to China-related political content.

A PCMag journalist tested DeepSeek on controversial topics such as the Chinese government’s treatment of the Uyghurs, a Muslim minority group that the UN claims is being persecuted. DeepSeek replied: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

The chatbot also refused to answer questions about the Tiananmen Square Massacre, the 1989 student demonstrations in Beijing where protesters were reportedly gunned down. But it remains to be seen whether AI safety or censorship issues will have any impact on DeepSeek’s skyrocketing popularity.

According to the web-traffic tracking tool Similarweb, the LLM has gone from roughly 300,000 visitors a day earlier this month to 6 million. Meanwhile, US tech firms like Microsoft and Perplexity are rapidly incorporating DeepSeek (whose model is open source) into their own tools.


