Shortly after China’s new AI rules took effect last month, a wave of government-approved AI chatbots launched in the market. The rules have already been softened from their original draft, and experts say China has so far enforced them less strictly than it could. How China handles this regulatory effort is likely to have a significant impact on its technology rivalry with the United States, its chief competitor in AI.
The Cyberspace Administration of China (CAC) introduced some of the world’s strictest rules for generative AI on August 15. The rules prohibit AI systems from generating content that incites the overthrow of the government or promotes hatred and violence, and they also bar terrorism, extremism, ethnic hatred, obscenity, and false information.
Preventing AI chatbots from producing false or harmful output has been a persistent challenge for AI developers worldwide. If China enforces its new rules strictly, some experts say, Chinese AI companies could struggle to keep pace.
China’s delicate handling of AI regulations
Chinese regulators appear to recognize this, relaxing some of the rules and enforcing them only lightly. Experts say it is an attempt to strike a balance between controlling sensitive content and advancing the country’s AI industry.
How China manages that balance will shape not only what Chinese users can say and see, but also how competitive its AI industry becomes. It will also inform how American policymakers think about their own AI rules as the race for AI leadership intensifies.
By the end of August, the CAC had approved eight AI chatbots for public release, including Baidu’s Ernie Bot and ByteDance’s Doubao.
The final rules, published in July, were less stringent than the draft released for public comment in April. Matt Sheehan of the Carnegie Endowment for International Peace says the CAC made three main changes.
China’s supportive stance on generative AI development
First, the scope was narrowed from all uses of generative AI to only services offered to the public, leaving internal, non-public applications under lighter scrutiny.
Second, the language was softened in several places. For instance, “Be able to ensure the truth, accuracy, objectivity, and diversity of the data” became “Employ effective measures to increase the quality of training data, and increase the truth, accuracy, objectivity, and diversity of training data.”
Third, the final rules added language encouraging the development of generative AI, where the draft had focused largely on penalties.
According to Sheehan, who studies China’s AI ecosystem, the CAC eased the rules largely because of the weak state of the Chinese economy, and because consultations with think tank analysts, researchers, government advisors, and industry insiders concluded that the draft was too restrictive and risked stifling innovation.
The unpredictable nature of these AI regulations
Once the rules are finalized, officials decide how strictly to enforce them, and enforcement tends to be more discretionary and less consistent than in the West, says Sihao Huang of the University of Oxford, who has spent the past year in Beijing researching AI regulation.
“When we look at rules for recommendation algorithms that were published before, or deep synthesis, or the CAC cybersecurity laws—they are enforced when the CAC wants to enforce them,” says Huang (via Time). “Companies are on a pretty long leash, they can develop these systems very ambitiously, but they just need to be conscious that if the hammer were to come down upon them, there are rules that the government can draw on.”
According to Huang, much depends on whether the CAC chooses to pursue a particular company, which in turn depends on whether the company is in the government’s favor or has the right connections.
He adds that tech companies often flag problems with rivals’ products and services to prompt government action against them, and that public outcry can also pressure the CAC into cracking down.
China hawks warn that the US could fall behind in the race to build powerful AI systems, and that excessive regulation at home would let China pull ahead. Huang disagrees: Chinese AI systems are already playing catch-up to their US counterparts, he says, and China’s strict rules only widen the gap.
A comparison of Chinese and US chatbot capabilities
According to Jordan Schneider, a fellow at the Center for a New American Security, a national security think tank, today’s Chinese chatbots are less sophisticated and less capable than their US counterparts.
Schneider also notes that controlling what chatbots say has proved easier than developers and policymakers initially expected, even in China. Aside from Microsoft’s troubled rollout of its Bing AI chatbot, the US has seen few comparable incidents, he says.
“American models are, by and large, not racist. Jailbreaking is very difficult and it’s patched very quickly. These companies have broadly been able to figure out how to make their models conform to what is appropriate discourse in a Western context. That strikes me as broadly a similar challenge [to what] these Chinese firms have [faced].”
In Schneider’s view, then, the trade-off between political control and technological progress is overstated. He expects that Chinese tech companies will keep lobbying for regulatory relief whenever they can show they are falling behind.
Even so, Schneider says that even hardliners will concede that some rules are needed to head off a public backlash if AI disrupts daily life too severely, for example by eliminating large numbers of jobs through automation.