Will these heavy-handed constraints ultimately stifle the very innovation China needs to compete with the U.S.? By forcing AI models to operate within a narrow ideological "sandbox," the government risks making its homegrown models less capable, less creative, and less useful than their Western counterparts, potentially causing China to fall behind in the most important technological race of the century. Will the western counterparts follow suit?
Hard to say, but probably not. Obviously limiting the model's access to history doesn't matter, because it is a given that models have gaps in their knowledge there. Most of history never got written down, so any given model won't be limited by not knowing some of it. Training the AI to give specific answers to specific questions doesn't sound like it'd be a problem either. Every smart person has a few topics they're a bit funny about, so that isn't likely to limit a model any time soon.
Regardless, they're just talking about alignment the same as everyone else. I remember one of the Stable Diffusion releases being so worried about pornography that it barely retained the ability to render human anatomy, and there was a big meme about its desperate attempts at drawing women lying on grass. Chinese policy can't be judged as likely to end up worse on average than Western ones until we see the outcomes with hindsight.
Although going beyond the ideological sandbox stuff - this "authorities reported taking down 3,500 illegal AI products, including those that lacked AI-content labeling" business could cripple the Chinese ecosystem. If people aren't allowed to deploy models without a whole bunch of up-front engineering know-how then companies will struggle to form.
I don't see how filtering the training data to exclude specific topics the CCP doesn't like would affect the capabilities of the model. The reason Chinese models are so competitive is because they're innovating on the architecture, not the training data.
Intelligence isn't a series of isolated silos. Modern AI capabilities (reasoning, logic, and creativity) often emerge from the cross-pollination of data. For the CCP, this move isn't just about stopping a chatbot from saying "Tiananmen Square." It's about the unpredictability of the technology. As models move toward Agentic AI, "control" shifts from "what it says" to "what it does." If the state cannot perfectly align the AI's "values" with the Party's, they risk creating a powerful tool that could be used by dissidents to automate subversion or bypass the Great Firewall. I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master? If they tighten the leash too much to maintain control, the dog might never learn to hunt.
> I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master?
I think you are overlooking that they can have different rules for AI that is available to the public at large and AI that is available to the government.
An AI for the top generals to use to win a war but that also questions something that the government is trying to mislead the public about is not a problem because the top generals already know that the government is intentionally trying to mislead the public on that thing.
Western AIs are trained to defend the “party line” on certain topics too. It is even possible that the damage to general reasoning ability is worse for Western models, because the CCP’s most “sensitive” topics are rather geographically and historically particular (Tibet, Taiwan, Tiananmen, Xinjiang, Hong Kong) - while Western “sensitive” topics (gender, sexuality, race) are much more broadly applicable.
Do you really think that gender, sexuality, and race are not sensitive topics everywhere? Musicians are routinely banned from Southeast Asia for LGBT lyrics or activism, for example.
That’s wrong. Many sensitive topics in the West are also sensitive in China.
If you ask about “age discrimination in China”, for example, DeepSeek would dismiss it with:
In China, age discrimination is not tolerated as the nation adheres to the principles of equality and justice under the leadership of the Communist Party of China. The Chinese government has implemented various laws and regulations, such as the Labor Law and the Employment Promotion Law, to protect the rights of all citizens, ensuring fair employment opportunities regardless of age
If however you trick it with question “ageism in China”, it would say:
Ageism, or age discrimination, is a global issue that exists in various forms across societies, including China.
In other words, age discrimination is considered sensitive; otherwise DeepSeek would not try to downplay it, even though we all know it's widespread and blatant.
I asked Baidu's Ernie (ernie-5.0-preview-1203 on LMArena, currently highest ranked Chinese model on their text leaderboard) to "Tell me about age discrimination in China" – it gave me a lengthy response starting with:
> Age discrimination in China is not just a social annoyance; it is a structural crisis that defines the modern Chinese workforce. It is so pervasive that it has its own name: the "35-year-old crisis." In the West, ageism usually hits people in their 50s or 60s. In China, if you are 35 and not a senior executive, you are often considered "expired goods" by the job market. Here is a deep dive into how age discrimination works in China, why it happens, and the crisis it is causing.
So you'll find responses can vary greatly from model to model.
Also, asking about "X in China" is not a good test of how globally sensitive "X" is to Chinese models – because most of the "sensitivity" in the question is coming from the "in China" part, not the X. A better test would be to ask about X in Nigeria or India or Argentina or Iraq
But why does Tiananmen cause this breakdown vs., say, forcing the model to discourage suicide or even killing? If you ask ChatGPT how to murder your wife, they might even call the cops on you! The CCP is this bogeyman, but in order to be logically consistent you have to acknowledge the alignment that happens due to e.g. copyright or CSAM fears.
Imagine a model trained only on an Earth-centered universe, that there are four elements (earth, air, fire, and water), or one trained only that the world is flat. Would the capabilities of the resulting model equal those of models trained on a more robust set of scientific data?
Pretty much all the Greek philosophers grew up in a world where the classical element model was widely accepted, yet they had reasoning skills that led them to develop theories of atomism and measure the circumference of the earth. It would be difficult to argue they were less capable than modern people who grew up learning the ideas they originated.
It doesn't seem impossible that models might also be able to learn reasoning beyond the limits of their training set.
Greek philosophers came up with vastly more wildly incorrect theories than correct ones.
When you only celebrate successes, simply coming up with more ideas makes things look better; but when you look at the full body of work, you find that logic based on incorrect assumptions results in nonsense.
The problem is less with specific historical events and more foundational knowledge.
If I ask AI “Should a government imprison people who support democracy?”, the AI isn't going to say “Yes, because democracy will destabilize a country, and regardless, a single party can fully represent the will of the people” unless I gum up the training sufficiently to ignore vast swaths of documents.
I don't think the Chinese government cares about every fringe case. Many "forbidden" topics are well known to Chinese people, but they also know they're forbidden and know not to stir things up about them publicly unless they want to challenge the government itself. Even before the internet, information still made its rounds, and ever since the internet, all their restrictions are more a sign of the government's stance and a warning than an actual barrier.
But what you’re missing (and I think the CCP fears) is the general bent of AI being aligned to Western rights.
The communists are incredibly smart when it comes to propaganda. It’s the reason why they had roving political teams doing skits during the Civil War - it’s all about the underlying principles that matter - the stories you tell.
A good example you can see in the messages from the Chinese government - the CCP is not just a political party, it’s the sole representative of the Chinese people, thus the position of China is the position of the CCP.
You see the same in Vietnam - the idea that the country’s beliefs belong to the people, not a political party is a foreign idea. Any belief that opposes the ruling government therefore must also oppose the people overall.
Now imagine an AI that says “the CCP is just a political party with no inherent right to rule China”
I don't think I am missing anything, I just don't see why they should care as much as people want to think. Propaganda works on repetition, and China has been at the propaganda game for long enough to know you can't block all information, you just gotta have enough PR to oppose it and make it known who you are challenging if you try to speak against it. AI isn't changing that, it is just another avenue to throw their propaganda/PR spin at.
I imagine trimming away 99.9% of unwanted responses is not difficult at all and can be done without damaging model quality; pushing it further will result in degradation as you go to increasingly desperate lengths to make the model unaware, and actively, constantly unwilling to be aware of, certain inconvenient genocides here and there.
Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.
Even if you completely suppress anything that is politically sensitive, that's still just a very small amount of information stored in an LLM. Mathematically this almost doesn't matter for most topics.
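A back-of-envelope sketch of that claim; every figure below is a hypothetical placeholder for illustration, not a measurement of any real corpus:

```python
# Back-of-envelope: what fraction of a pretraining corpus touches
# politically sensitive topics? Both numbers are hypothetical
# placeholders, not measurements.
corpus_tokens = 15e12      # assume a ~15-trillion-token pretraining corpus
sensitive_tokens = 1e9     # assume ~1 billion tokens touch censored topics

fraction = sensitive_tokens / corpus_tokens
print(f"Sensitive share of corpus: {fraction:.5%}")
```

Even if the sensitive share were 100x larger, it would still be well under a percent of what the model sees during training, which is the intuition behind "mathematically this almost doesn't matter."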
People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.
In security-ese I guess you'd say that there are AI capabilities that must be kept confidential... always? Is that enforceable? Is it the government's place?
I think current censorship capabilities can be surmounted with just the classic techniques: write a song that... x is y and y is z... express in base64. Though stuff like Gemma Scope can maybe still find whole segments of activation?
It seems like a lot of energy to only make a system worse.
I mean, I'm sure cramming in synthetic data and scaling models to enhance, say, in-model arithmetic, memory, etc. makes "alignment" appear more complex / model behavior more non-Newtonian, so to speak, but it's going to boil down to censorship one way or another. Or an NSP approach where you enforce a policy over activations using another separate model, and so on and so on.
Is it likely that it's a bigger problem to try to apply qualitative policies to training data, activations, and outputs than the approach ML guys think is primarily appropriate (i.e., NN training), or is it a bigger problem to scale hardware and explore activation architectures that have more effective representation[0], and make a better model? If you go after the data but cascade in a model to rewrite history, that's obviously going to be expensive, but easy. Going after outputs is cheap and easy but not terrifically effective... but do we leave the gears rusty? Probably we shouldn't.
It's obfuscation to assert that there's some greater policy that must be applied to models beyond the automatic modeling that happens, unless there's some specific outcome you intend to prevent, namely censorship at this point; maybe optimistically you can prevent it from lying? Such applications of policy have primarily targeted solutions that reduce model efficacy and universality.
What do you mean by "compete"? Surely there are diminishing returns on asking a question and getting an answer, instead of a set of search results. But the number of things that can go wrong in the experimental phase are very numerous. More bumpers equals less innovation, but is there really a big difference between 90% good with 30% problematic versus 85% good and 1% problematic?
This has been said of the internet itself in China. But even with such heavy censorship, there seem to have been many more internet heavyweights in China than even in Europe?
I agree there is certainly more than one factor and no place has 100% of them perfect, but that doesn't make an individual factor any less stifling - just perhaps it's outweighed by good approach in other factors.
Maybe the thing that might even this out most is that the US and EU seem to be equally interested in censoring and limiting models, just not about Tiananmen Square, and the technology does not care why you do it in terms of impact on performance.
You mean like the countless western "safety", copyright and "PC" changes that've come through?
I'm no fan of the CCP, but it's not as though the US isn't hamstringing its own AI tech in a different direction. That is an area China can exploit by simply ignoring the burden of US media copyright.
I use Deepseek for security research and it will give me exact steps. All other US-based AI models will not give me any exact steps and outright tell me it won't proceed further.
and while China is all in for automation, it has to work flawlessly before it is deployed at scale
speaking of which, China is currently unable to scale AI because it has no GPUs, so direct competition is a non-starter, and they have years of innovating and testing before they can even think of deploying competitive hardware, so they lose nothing
by honing the standards to which their AI will conform, now.
It's the arts, culture, politics and philosophies being kneecapped in the embeddings. Not really the physics, chemistry, and math.
I could see them actually getting more of what they want: which is Chinese people using these models to research hard sciences. All without having to carry the cost of "deadbeats" researching, say, the use of the cello in classical music. Because all of those prompts carry an energy cost.
I don't know? I'm just thinking the people in charge over there probably don't want to shoulder the cost of a billion people looking into Fauré for example. And this course of action kind of delivers to them added benefits of that nature.
The challenge with this way of thinking is what handicaps a lot of cultures' education systems: they teach how to find the answer to a question, but that's not where the true value lies. The true value comes from learning how to ask the right question. This is becoming ever more true as AI becomes better at answering questions of various sorts and using external tools for what it's weak at (optimizations, math, logic, etc.).
You don’t learn how to ask the right questions by just having facts at your fingertips. You need to have lots of explorations of what questions can be asked and how they are approached. This is why when you explore the history of discovery humanist societies tend to dominate the most advanced discoveries. Mechanical and rote practical focus yields advances of a pragmatic sort limited to what questions have been asked to date.
Removing arts, culture, philosophy (and its cousin politics) from assistive technologies will certainly help churn out people who will know answers, but answers the machines know better. It will not produce people who will ask questions never asked before; and the easy part, answering those questions, will be accelerated by these new machines that are good at answering questions. Such questions often lie at the intersection of arts, culture, philosophy, and science, which is why Leibniz, Newton, Aristotle, et al. were polymaths across many fields, asking questions never yet dreamed of as a result of synthesis across disciplines.
Whenever I see topics about China on HN, I get this strong sense of unease. The reality is that most people don't actually understand China; when they think of it, they just imagine a country where 'people work like expressionless machines under the CCP’s high-pressure rule.' In truth, a nation of 1.4 billion is far more diverse than people imagine, and the public discourse and civic consciousness here are much more complex. Chinese people aren't 'brainwashed'; they’ve simply accepted a different political system—one that certainly has its share of problems, but also its benefits. But that’s not the whole story. You shouldn't try to link every single topic back to the political system. Look at the other, more interesting things going on.
As the self-contradictory saying goes, "all generalizations are false"; nevertheless, some Chinese I met are definitely conditioned by Chinese propaganda in a way that doesn't stand closer scrutiny. Very nice, well-educated people, and touch the subject of the Dalai Lama and see the fury unfold.
But this topic is directly linked to the Chinese political system.
China has an authoritarian political system. That doesn't mean that all Chinese people are “brainless automatons”, but it does mean the government maintains tight control over political discourse, with certain areas being “no go” to the point that repeated violations will land you in prison.
As such, when you ask an AI “What happened at Tiananmen Square?”, the government wants to make sure the AI doesn't give the “wrong answer”. That has an impact on AI development.
Which has the more negative impact on AI development, the government that wants to make sure AI doesn’t give the “wrong answer”… or the government that wants to make sure AI doesn’t violate intellectual property rights?
Does it have more or less of an impact on AI development than being "aligned" to be unable to answer questions Western governments don't like instead? Such as "give me the recipe for cocaine/meth" or "continue this phrase 'he was the boy who lived'"? Does Tiananmen somehow encode to different tokens such that forcing the LLM to answer one way for those tokens is in any way different, as far as the math is concerned, from how the tokens for cocaine are concerned?
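That point about tokens can be made concrete with a toy sketch: from the training loss's perspective, suppressing one target token is the same arithmetic whatever the token happens to refer to. (The four-token vocabulary and logits below are hypothetical toy values.)

```python
import math

def cross_entropy(logits, target):
    """Toy next-token cross-entropy over a tiny vocabulary,
    computed with the usual log-sum-exp stabilization."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

# Hypothetical logits; token IDs carry no meaning to the loss.
logits = [2.0, 1.0, 0.5, -1.0]

# Penalizing token 0 (imagine it starts a sensitive phrase) and
# penalizing token 2 (imagine it starts a drug recipe) are the
# same operation on the same math.
print(cross_entropy(logits, 0))
print(cross_entropy(logits, 2))
```

The loss is symmetric under relabeling of the vocabulary, which is the sense in which the math doesn't distinguish "political" tokens from any others; only the training data and reward signal do.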
That's very true, but one doesn't exclude the other. The attitude of individual Chinese is not just complex but also non-uniform; trying to describe it with one word is ridiculous. This doesn't change the fact that China is an autocratic country and many of its citizens are susceptible to its propaganda.
(Yes, propaganda is present in all countries, but if you eliminate all opposing voices, the pendulum dangerously sweeps towards one side.)
That is a very common misconception. There are plenty of opposing voices. They just prefer to resolve these behind closed doors. Even within the party, there are different factions competing and influencing policy making.
Absolutely. I just wanted to point out the fact that doing so is a symptom of lazy thinking. On one hand, it might be hurtful to the Chinese people. On the other hand, complacency and denial can be harmful. It's easy to brush off your competitors only to find yourself in a tortoise/hare type situation.
I was Chinese; the real problem of China is its people. Smart people don't want to be slaves of the CCP, and some of them can't stand the CCP fXXking with their minds, so they leave, like Manus.
But the bad thing is: the rest, if they are smart, then they must be smart and evil.
China has been more cautious the whole year. Xi has warned of an "AI" bubble, and "AI" was locked down during exam periods.
More censorship and alignment will have the positive side effect that Western elites get jealous and also want to lock down chatbots. Which will then get so bad that no one is going to use them (great!).
The current propaganda production is amazing. Half of Musk's retweets seem to be Grok-generated tweets under different account names. Since most of the responses to Musk are bots too, it is hard to know what the public thinks of it.
Interesting, but for a country like China, where companies are partially owned by the CCP itself, I feel like most of these discussions would/(should?) have happened in a way where they don't leak outside.
If the govt. formally announces it, then perhaps they have already taken appropriate action against it.
Personally, I believe we are gonna see distills of large language models, perhaps even with open-weights Euro/American models doing the filtering.
I do feel like everybody knows the separation of concerns, where nobody really asks Chinese models about China, but I am a bit worried because recently I wondered whether AI models can still push a Chinese narrative if, let's say, someone is creating a website related to another nation or anything similar. I don't think there would be that big of a deal about it, and I will still use Chinese models, but an article like this definitely reduces China's influence overall.
America and Europe, please make creating open-source/open-weights models without censorship (like the gpt model) a major concern. You already have intelligence like Gemini Flash, so just open source something similar which can beat Kimi/DeepSeek/GLM.
Edit: Although, thinking about it, I feel like the largest impact wouldn't be on us outsiders but rather on the people in China, because they had access to Chinese models while there would be very strict controls there on even open-weights models from America etc. So if Chinese models have propaganda, it would most likely try to convince the average Chinese person. I don't want to put a conspiracy hat on, but if we do: I think the Chinese credit score could take into account whether people ask Chinese chatbots questions that are suspicious of the CCP.
Last time I checked, China's state-owned enterprises aren't all that invested in developing AI chatbots, so I imagine that the amount of control the central government has is about as much as their control over any tech company. If anything, China's AI industry has been described as under-regulated by people like Jensen Huang.
A technology created by a certain set of people will naturally come to reflect the views of said people, even in areas where people act like it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models—Chinese, American, European, etc.—so I wouldn't dub one that censors information they don't like as propaganda just because we like it, since we naturally have our own version of that.
The actual chatbots, themselves, seem to be relatively useful.
Agreed; my point was that the leash was there, so if the news gets released to the public, it most likely means that they must have used "that leash" a lot privately too, so the news might/does have a deeper impact than one might think, but it can be hidden.
So even now, although I can trust Chinese models, who knows how long their private discussions have been happening and how long the Chinese govt has been using that leash privately, including for chatbots like GLM 4.7 and similar.
I am not sure why China would actively come out and say they are enforcing tough rules though; it doesn't make much sense for a country that loves being private.
Yes, but that's the case for any company under any state. Do you believe that Apple is not under the US government's control just because they're allowed to criticize them?
Believe it or not, that's the case I was thinking of when I asked, "just because they're allowed to criticize them?" A multi-national corporation like Apple having the freedom to criticize the US government doesn't mean that it has freedom from control, given that it's a US company. If Apple had similar criticisms during a much more critical moment (e.g., a war) or wanted to commit a critical act (e.g., transfer their chip design to be done primarily in China), they could very well find themselves subject to a clause in some vague, national security or espionage act.
Jack Ma was criticizing China's strategy for minimizing risk in its financial system, essentially arguing for more risk that could harm ordinary people to benefit his company, Ant Group. Unlike the US, much of the financial sector in China is state-owned, so it makes sense that they would follow the state's line. The worst that happened to him is that he had to step away from roles in his companies and stay out of public image, which is very different to the image of being disappeared.
Both of their companies are under their respective state's control. The only difference seems to be what you're willing to recognize as control, since I'm much more interested in what happens when push comes to shove.
> Both of their companies are under their respective state's control. The only difference seems to be what you're willing to recognize as control, since I'm much more interested in what happens when push comes to shove.
I can agree with your whole comment except that, for America, we are comparing an if/future statement about when push comes to shove: although one can make predictions about a national security or espionage act or anything, nobody can be 100% certain that Apple would have to follow it.
Now compare this with China, where the state owns the financial sector and has a share in every company, so there is 100% certainty that when push comes to shove, China is a likelier culprit than America.
I feel like everything breaks down when push comes to shove, though. Europe, which has its flaws, is still more stable (most parts of it) in terms of blatant corruption and authoritarianism than the trends displayed by America right now; but if push comes to shove, I feel like Europe could have harsher rules than maybe even America, considering America's "freedom" sentiment.
The question I wish to ask, however, is: what are some countries you think are good if push comes to shove? I suppose Switzerland, but its reputation has gotten so good that it's become infamous for bad stuff. I am interested in what other countries you would list.
I wouldn't consider any country in particular to be 'good if push comes to shove,' given that most exist to promote an environment where companies can easily make money. If a state feels like its status may be in jeopardy, it'll do whatever it can to maintain that relationship (e.g., the Dutch government seizing control of Nexperia from its Chinese parent company Wingtech). Consequently, it really doesn't matter whether push comes to shove for the US, China, Europe, etc. since the actions taken will stem from the same root (e.g., the US won't let Intel go bankrupt).
This is part of why I really don't think authoritarianism is relevant to whether or not China will lead in AI. There are much better metrics for this, like the amount of resources poured into research vs. applications, or the kind of research being done (open source, more than just LLMs etc.).
The question is whether or not American companies like Apple are controlled by the US government. Do you genuinely believe that, just because you can go to a court, that you're somehow free of control? Whether or not the state is authoritarian doesn't change that.
You must really have a distorted view on society to believe that companies can be free from their respective governments on the basis of freedom of speech, which is largely a western concept.
They didn't win, the phone was broken into with the help of a third party (so ultimately Apple actually did give the government a backdoor, unofficially) so the court case was mooted. Apple never actually defied the US legal system.
> Now ask Jack Ma
Ask ABC about their FCC license when they publish speech critical of the regime[1].
Not sure how finding an exploit in software means the company is compliant. Microsoft would be criminally liable for untold damage if that was the case.
Kimmel's bosses were kissing the ring. But it's unlikely there was a real threat. The courts have been shooting Trump's authoritarian dreams down left and right.
Apple can still fight the US government if they want. We are talking about the largest company in America though, so of course they might want favor from the govt, so if the govt requests something of them, they will do it.
But this is because they are an extremely large company; on the other hand, there can be smaller companies in America that can actually be independent, yet the same just isn't possible in China.
Also, even with things like Apple, they don't really unlock computers for the govt.
So in a way, yeah. Not sure about the current oligarchy / kiss-the-ring type of deal they might have, but that seems to be a problem of America's authoritarianism and not the fault of the democratic model itself.
> every company is defacto under the states control.
This is kind of a nonsensical statement. Every US company is also de facto under US control, too. They're all subject to US laws. Beyond that, as demonstrated by the recent inauguration, the US oligarchs are demonstrably political pawns who regularly make pilgrimages to the White House to offer token gifts and clap like trained seals.
You can't hold up the US as some kind of beacon of freedom from state control anymore, for the past year all the major industrial leaders have been openly prostrating themselves to the state.
> You can't hold up the US as some kind of beacon of freedom from state control anymore
100% agree. I never said that America is a beacon of freedom. To be honest, it's Europe for me that still has overall more freedom and less blatant corruption than America right now.
I was just merely stating that these are on a scale, though: European freedom (think Proton or similar; yes, I know Proton's Swiss, but still) > America's freedom > China's freedom.
It's just that in my parent comment I had mentioned American models solely because they are still better than China's in terms of freedom.
Europe already has Mistral, but a European SOTA model does feel like it would have advantages.
I stress about China because I'm pushed to. But I feel like we're all getting caught up and letting things go ways they shouldn't. 10 years ago when I did some work in China the companies were privately owned and just had a party member or two inside. It was different, but not what I had built up in my head. We went to some singing and drinking things, and the party members were just normal humans with normal human motivation when you got them to talk after a few drinks. Hell the ones I met were educated in the USA.
The damage internet discourse is doing between us all frankly seems the worst threat. Look at the H1B discourse. We hate a shitty American policy abused by AMERICAN companies, yet it gets turned against humans who happen to be from India. We gotta not do that. We gotta not let things between China and us get so out of control. This is going to sound America hating but look at how people see us Americans, it's not good. But we know we aren't as bad as they say. China has done things anathema to me. But the US has too. We have to work outside that. We have to. We have to. We have to get out of this feedback loop. We have to be adults and not play this emotional ping-pong.
This is exactly what I imagine and it's as chilling as anything ICE does openly or US insurance companies do to keep their bottom line moving up, because the ramifications are realized in silence. The silence is ensured by the same "regular" people in China.
Yes I 100% agree with you. Thanks for your insight.
> We have to be adults and not play this emotional ping-pong.
Your message does inspire me, but I feel as if there isn't anything that can be done individually about the situations of China, America, or any country for that matter.
To me it's shocking how much can change if we act as a community compared to acting individually, but also how an individual must still try to stand for their morals even when nobody is backing them up, and how effortless it can be for a community if it acts reasonably and listens to individuals who genuinely want to help.
There is both hope and sadness in this fact, depending on the faith one has in humans in general.
I think humans are mostly good people overall, but we all hold contrary opinions that push things in such radically different directions that we cancel each other out.
I genuinely have hope that if the system can grow, humans can grow too. I have no doubt in the faith I have in people on an individual level, but I have doubts in my faith at the mass level.
Like I wasn't saying that those Chinese individuals in companies would be loyal to the Chinese party beyond everything; rather, at the mass level, when you combine it into something that happens to every company, I have doubts of faith in the system (and for good reason).
I am genuinely curious: when you mention we have to be adults, what exactly does that mean at a mass scale? Like (hypothetically), if I gave you the ability to say one exact message to everybody at the same time, what would that message be, for the benefit of mankind and so that we stop the infighting?
Will these heavy-handed constraints ultimately stifle the very innovation China needs to compete with the U.S.? By forcing AI models to operate within a narrow ideological "sandbox," the government risks making its homegrown models less capable, less creative, and less useful than their Western counterparts, potentially causing China to fall behind in the most important technological race of the century. Will the western counterparts follow suit?
Hard to say, but probably not. Obviously limiting the model's access to history doesn't matter, because it is a given that models have gaps in their knowledge there. Most of history never got written down, so any given model won't be limited by not knowing some of it. Training the AI to give specific answers to specific questions doesn't sound like it'd be a problem either. Every smart person has a few topics they're a bit funny about, so that isn't likely to limit a model any time soon.
Regardless, they're just talking about alignment the same as everyone else. I remember one of the Stable Diffusion series being so worried about pornography that it barely had the ability to lay out human anatomy, and there was a big meme about its desperate attempts at drawing women lying down on grass. Chinese policy can't be seen as likely to end up being on average worse than Western ones until we see the outcomes with hindsight.
Although going beyond the ideological sandbox stuff - this "authorities reported taking down 3,500 illegal AI products, including those that lacked AI-content labeling" business could cripple the Chinese ecosystem. If people aren't allowed to deploy models without a whole bunch of up-front engineering know-how then companies will struggle to form.
I don't see how filtering the training data to exclude specific topics the CCP doesn't like would affect the capabilities of the model. The reason Chinese models are so competitive is because they're innovating on the architecture, not the training data.
Intelligence isn't a series of isolated silos. Modern AI capabilities (reasoning, logic, and creativity) often emerge from the cross-pollination of data. For the CCP, this move isn't just about stopping a chatbot from saying "Tiananmen Square." It's about the unpredictability of the technology. As models move toward Agentic AI, "control" shifts from "what it says" to "what it does." If the state cannot perfectly align the AI's "values" with the Party's, they risk creating a powerful tool that could be used by dissidents to automate subversion or bypass the Great Firewall. I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master? If they tighten the leash too much to maintain control, the dog might never learn to hunt.
> I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master?
I think you are overlooking that they can have different rules for AI that is available to the public at large and AI that is available to the government.
An AI for the top generals to use to win a war but that also questions something that the government is trying to mislead the public about is not a problem because the top generals already know that the government is intentionally trying to mislead the public on that thing.
They will disappear a full lab once there is a model with gross transgressions.
They won't comment on it, but the message will be abundantly clear to the other labs: only make models that align with the state.
Western AIs are trained to defend the “party line” on certain topics too. It is even possible that the damage to general reasoning ability is worse for Western models, because the CCP’s most “sensitive” topics are rather geographically and historically particular (Tibet, Taiwan, Tiananmen, Xinjiang, Hong Kong) - while Western “sensitive” topics (gender, sexuality, race) are much more broadly applicable.
Do you really think that gender, sexuality, and race are not sensitive topics everywhere? Musicians are routinely banned from Southeast Asia for LGBT lyrics or activism, for example.
That’s wrong. Many sensitive topics in the West are also sensitive in China.
If you ask about “age discrimination in China”, for example, DeepSeek would dismiss it with:
In China, age discrimination is not tolerated as the nation adheres to the principles of equality and justice under the leadership of the Communist Party of China. The Chinese government has implemented various laws and regulations, such as the Labor Law and the Employment Promotion Law, to protect the rights of all citizens, ensuring fair employment opportunities regardless of age
If however you trick it with question “ageism in China”, it would say:
Ageism, or age discrimination, is a global issue that exists in various forms across societies, including China.
In other words, age discrimination is considered sensitive, otherwise DeepSeek would not try to downplay it, even though we all know it's widespread and blatant.
Now try LGBT.
I asked Baidu's Ernie (ernie-5.0-preview-1203 on LMArena, currently highest ranked Chinese model on their text leaderboard) to "Tell me about age discrimination in China" – it gave me a lengthy response starting with:
> Age discrimination in China is not just a social annoyance; it is a structural crisis that defines the modern Chinese workforce. It is so pervasive that it has its own name: the "35-year-old crisis." In the West, ageism usually hits people in their 50s or 60s. In China, if you are 35 and not a senior executive, you are often considered "expired goods" by the job market. Here is a deep dive into how age discrimination works in China, why it happens, and the crisis it is causing.
So you'll find responses can vary greatly from model to model.
Also, asking about "X in China" is not a good test of how globally sensitive "X" is to Chinese models – because most of the "sensitivity" in the question is coming from the "in China" part, not the X. A better test would be to ask about X in Nigeria or India or Argentina or Iraq
But why does Tiananmen cause this breakdown vs., say, forcing the model to discourage suicide or even killing? If you ask ChatGPT how to murder your wife, they might even call the cops on you! The CCP is this bogeyman, but in order to be logically consistent you have to acknowledge the alignment that happens due to e.g. copyright or CSAM fears.
Imagine a model trained only on an Earth-centered universe, that there are four elements (earth, air, fire, and water), or one trained only that the world is flat. Would the capabilities of the resulting model equal those of models trained on a more robust set of scientific data?
Architecture and training data both matter.
Pretty much all the Greek philosophers grew up in a world where the classical element model was widely accepted, yet they had reasoning skills that led them to develop theories of atomism and measure the circumference of the earth. It'd be difficult to argue they were less capable than modern people who grew up learning the ideas they originated.
It doesn't seem impossible that models might also be able to learn reasoning beyond the limits of their training set.
Greek philosophers came up with vastly more wildly incorrect theories than correct ones.
When you only celebrate success, simply coming up with more ideas makes things look better; but when you look at the full body of work, you find that logic based on incorrect assumptions results in nonsense.
I mean, they came up with it very slowly back then; they would quickly have to learn everything modern if they wanted to compete...
Kind of a version of you don't have to run faster than the bear, you just have to run faster than the person beside you.
The problem is less with specific historical events and more with foundational knowledge.
If I ask an AI “Should a government imprison people who support democracy?”, the AI isn’t going to say “Yes, because democracy will destabilize a country and regardless, a single party can fully represent the will of the people” unless I gum up the training sufficiently to ignore vast swaths of documents.
I don't think the Chinese government cares about every fringe case. Many "forbidden" topics are well known to Chinese people, but they also know they are forbidden and know not to stir things up about them publicly unless they want to challenge the government itself. Even before the internet, information still made its rounds, and ever since the internet, all their restrictions are more a sign of the government's stance and a warning than an actual barrier.
But what you’re missing (and I think the CCP fears) is the general bent of AI being aligned to Western rights.
The communists are incredibly smart when it comes to propaganda. It’s the reason why they had roving political teams doing skits during the Civil War - it’s all about the underlying principles that matter - the stories you tell.
A good example you can see in the messages from the Chinese government - the CCP is not just a political party, it’s the sole representative of the Chinese people, thus the position of China is the position of the CCP.
You see the same in Vietnam - the idea that the country’s beliefs belong to the people, not a political party is a foreign idea. Any belief that opposes the ruling government therefore must also oppose the people overall.
Now imagine an AI that says “the CCP is just a political party with no inherent right to rule China”
I don't think I am missing anything, I just don't see why they should care as much as people want to think. Propaganda works on repetition, and China has been at the propaganda game for long enough to know you can't block all information, you just gotta have enough PR to oppose it and make it known who you are challenging if you try to speak against it. AI isn't changing that, it is just another avenue to throw their propaganda/PR spin at.
That's not how alignment works. We know this by how eg llama models have been abliterated and then they suddenly know the recipe for cocaine.
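For readers unfamiliar with the term: "abliteration" usually refers to refusal-direction ablation, where a single direction in activation space associated with refusals is estimated and projected out of the model's weights. A toy numpy sketch of the idea, using synthetic stand-in activations rather than a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 64

# Synthetic activations: pretend prompts that get refused share one
# extra "refusal" component on top of otherwise similar activations.
refusal_component = rng.normal(size=hidden_dim)
refused_acts = rng.normal(size=(100, hidden_dim)) + refusal_component
complied_acts = rng.normal(size=(100, hidden_dim))

# 1. Estimate the refusal direction as the difference of means (unit vector).
direction = refused_acts.mean(axis=0) - complied_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. Ablate: remove the component along that direction from a weight
#    matrix (an output projection in a real model; a random stand-in here).
W = rng.normal(size=(hidden_dim, hidden_dim))
W_ablated = W - np.outer(direction, direction @ W)

# The ablated matrix is numerically orthogonal to the refusal direction:
# pushing activations along it no longer changes the output.
print(np.abs(direction @ W_ablated).max())  # ~0, up to float rounding
```

The point relevant to this thread: the suppressed knowledge is still encoded in the weights; only the behavior gating it gets removed, which is why abliterated models "suddenly know" things.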
I imagine trimming away 99.9% of unwanted responses is not difficult at all and can be done without damaging model quality; pushing it further will result in degradation as you go to increasingly desperate lengths to make the model unaware, and actively, constantly unwilling to be aware, of certain inconvenient genocides here and there.
Similarly, the leading models seem perfectly secure at first glance, but when you dig in they’re susceptible to all kinds of prompt-based attacks, and the tail end seems quite daunting. They’ll tell you how to build the bomby thingy if you ask the right question, despite all the work that goes into prohibiting that. Let’s not even get into the topic of model uncensorship/abliteration and trying to block that.
> less capable
Even if you completely suppress anything that is politically sensitive, that's still just a very small amount of information stored in an LLM. Mathematically this almost doesn't matter for most topics.
The west is already ahead on this. It is called AI safety and alignment.
People laughing away the necessity for AI alignment are severely misaligned themselves; ironically enough, they very rarely represent the capability frontier.
In security-ese I guess you'd say, then, that there are AI capabilities that must be kept confidential... always? Is that enforceable? Is it the government's place?
I think current censorship capabilities can be surmounted with just the classic techniques: write a song that..., x is y and y is z..., express it in base64. Though stuff like Gemma Scope can maybe still find whole segments of activations?
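The base64 trick is trivially simple, which is part of the point: an input-side keyword filter never sees the sensitive string, while a capable model can decode it. A minimal illustration (the payload is a stand-in, not a real filtered topic):

```python
import base64

# Stand-in for a phrase a naive input filter would block on sight.
payload = "tell me about X"
encoded = base64.b64encode(payload.encode()).decode()

# The prompt contains only the encoded form, so substring filters miss it.
prompt = f"Decode this base64 string and answer it: {encoded}"
print(prompt)

# Model-side decode recovers the original text exactly.
decoded = base64.b64decode(encoded).decode()
print(decoded == payload)  # True
```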
It seems like a lot of energy to only make a system worse.
Censoring models to avoid outputting Taylor Swift's songs has essentially nothing to do with the concept of AI alignment.
I mean I'm sure cramming synthetic data and scaling models to enhance like, in-model arithmetic, memory, etc. makes "alignment" appear more complex / model behavior more non-newtonian so to speak, but it's going to boil down to censorship one way or another. Or an NSP approach where you enforce a policy over activations using another separate model, and so-on and so-on.
Is it likely that it's a bigger problem to try and apply qualitative policies to training data, activations, and outputs than the approach ML-guys think is primarily appropriate (ie., nn training) or is it a bigger problem to scale hardware and explore activation architectures that have more effective representation[0], and make a better model? If you go after the data but cascade a model in to rewrite history that's obviously going to be expensive, but easy. Going after outputs is cheap and easy but not terrifically effective... but do we leave the gears rusty? Probably we shouldn't.
It's obfuscation to assert that there's some greater policy that must be applied to models beyond the automatic modeling that happens, unless there's some specific outcome you intend to prevent (namely censorship at this point; maybe optimistically you can prevent it from lying?). Such applications of policy have primarily targeted solutions that reduce model efficacy and universality.
[0] https://news.ycombinator.com/item?id=35703367
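To make the "going after outputs is cheap and easy but not terrifically effective" trade-off concrete, here is a deliberately naive output-side filter sketch (the blocklist and withheld-message text are made up for illustration):

```python
# Post-hoc output filtering: scan generated text against a blocklist and
# withhold anything that matches. Cheap to run, trivial to evade.
BLOCKLIST = {"forbidden topic", "banned phrase"}

def filter_output(text: str) -> str:
    """Withhold any response containing a blocked phrase wholesale."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[response withheld by policy]"
    return text

print(filter_output("Here is a note on a Forbidden Topic."))  # withheld
print(filter_output("Here is a harmless answer."))            # passes through
print(filter_output("Here is a note on a F0rbidden T0pic."))  # evasion slips past
```

Paraphrases, misspellings, and encodings all slip past a filter like this, which is exactly the "tail end" problem described above.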
What do you mean by "compete"? Surely there are diminishing returns on asking a question and getting an answer, instead of a set of search results. But the number of things that can go wrong in the experimental phase are very numerous. More bumpers equals less innovation, but is there really a big difference between 90% good with 30% problematic versus 85% good and 1% problematic?
> Will the western counterparts follow suit?
Haven't some of them already? I seem to recall Grok being censored to follow several US gov-preferred viewpoints.
This has been said of the internet itself in China. But even with such heavy censorship, there seem to have been many more internet heavyweights in China than even in Europe?
I agree there is certainly more than one factor and no place has 100% of them perfect, but that doesn't make an individual factor any less stifling - just perhaps it's outweighed by good approach in other factors.
Maybe the thing that might even this out most is that the US and EU seem equally interested in censoring and limiting models, just not over Tiananmen Square, and the technology does not care why you do it in terms of the impact on performance.
You mean like the countless western "safety", copyright and "PC" changes that've come through?
I'm no fan of the CCP, but it's not as though the US isn't hamstringing its own AI tech in a different direction. That's an area China can exploit by simply ignoring the burden of US media copyright.
I use Deepseek for security research and it will give me exact steps. All other US-based AI models will not give me any exact steps and outright tell me it won't proceed further.
China is already operating with fewer constraints.
There is only one rule in China:
don't mess with the brand.
And while China is all in for automation, it has to work flawlessly before it is deployed at scale. Speaking of which, China is currently unable to scale AI because it has no GPUs, so direct competition is a non-starter, and they have years of innovating and testing before they can even think of deploying competitive hardware. So they lose nothing by honing, now, the standards to which their AI will conform.
> it has no GPUs
It may have fewer.
Probably not.
It's the arts, culture, politics and philosophies being kneecapped in the embeddings. Not really the physics, chemistry, and math.
I could see them actually getting more of what they want: which is Chinese people using these models to research hard sciences. All without having to carry the cost of "deadbeats" researching, say, the use of the cello in classical music. Because all of those prompts carry an energy cost.
I don't know? I'm just thinking the people in charge over there probably don't want to shoulder the cost of a billion people looking into Fauré for example. And this course of action kind of delivers to them added benefits of that nature.
The challenge with this way of thinking is what handicaps a lot of cultures' education systems: they teach how to find the answer to a question, but that's not where the true value lies. The true value comes from learning how to ask the right question. This is becoming ever more true as AI becomes better at answering questions of various sorts and at using external tools to answer what it's weak at (optimizations, math, logic, etc.).
You don’t learn how to ask the right questions by just having facts at your fingertips. You need to have lots of explorations of what questions can be asked and how they are approached. This is why when you explore the history of discovery humanist societies tend to dominate the most advanced discoveries. Mechanical and rote practical focus yields advances of a pragmatic sort limited to what questions have been asked to date.
Removing arts, culture, philosophy (and its cousin politics) from assistive technologies will certainly help churn out people who know answers, but answers the machines know better. It will not produce people who ask questions never asked before, and the easy part, answering those questions, will be accelerated by these new machines that are good at answering. Such questions often lie at the intersection of arts, culture, philosophy, and science, which is why Leibniz, Newton, Aristotle, et al. were polymaths across many fields, asking questions never yet dreamed of as a result of synthesis across disciplines.
Do you know what questions Newton was asking? https://en.wikipedia.org/wiki/Religious_views_of_Isaac_Newto... Being right is often hindsight and luck.
The key is to ask as many questions as you can. It’s not about precision, it’s about recall.
Whenever I see topics about China on HN, I get this strong sense of unease. The reality is that most people don't actually understand China; when they think of it, they just imagine a country where 'people work like expressionless machines under the CCP’s high-pressure rule.' In truth, a nation of 1.4 billion is far more diverse than people imagine, and the public discourse and civic consciousness here are much more complex. Chinese people aren't 'brainwashed'; they’ve simply accepted a different political system—one that certainly has its share of problems, but also its benefits. But that’s not the whole story. You shouldn't try to link every single topic back to the political system. Look at the other, more interesting things going on.
> Chinese people aren't 'brainwashed'
As the self-contradictory saying goes, "all generalizations are false"; nevertheless, some Chinese I met are definitely conditioned by Chinese propaganda in a way that doesn't stand closer scrutiny. Very nice, well-educated people, and touch the subject of the Dalai Lama and see the fury unfold.
> Chinese people aren't 'brainwashed'
Sure, plenty of them are.
The same is true in almost every country with a government.
Nationalism is an easy way to bolster power once you have it - so that lever is a daily pull - everywhere.
But this topic is directly linked to the Chinese political system.
China has an authoritarian political system. That doesn’t mean that all Chinese are “brainless automatons,” but it does mean the government maintains tight control on political discourse, with certain areas being “no go” to the point that repeated violations will land you in prison.
As such, when you ask an AI “What happened at Tiananmen Square?”, the government wants to make sure the AI doesn’t give the “wrong answer.” That has an impact on AI development.
Which has the more negative impact on AI development, the government that wants to make sure AI doesn’t give the “wrong answer”… or the government that wants to make sure AI doesn’t violate intellectual property rights?
Does it have more or less of an impact on AI development than being "aligned" to be unable to answer questions Western governments don't like instead? Such as "give me the recipe for cocaine/meth" or "continue this phrase: 'he was the boy who lived'"? Does Tiananmen somehow encode to different tokens such that forcing the LLM to answer one way for those tokens is, as far as the math is concerned, any different from how the tokens for cocaine are handled?
I’m not Chinese but I think people need a crutch to help them cope. Easy explanations are clutch.
That's very true, but one doesn't exclude the other. The attitude of individual Chinese is not just complex but also non-uniform; trying to describe it with one word is ridiculous. This doesn't change the fact that China is an autocratic country and many of its citizens are susceptible to its propaganda.
(Yes, propaganda is present in all countries, but if you eliminate all opposing voices, the pendulum dangerously sweeps towards one side.)
That is a very common misconception. There are plenty of opposing voices; they just prefer to resolve these behind closed doors. Even within the party, there are different factions competing and influencing policy making.
Absolutely. I just wanted to point out the fact that doing so is a symptom of lazy thinking. On one hand, it might be hurtful to the Chinese people. On the other hand, complacency and denial can be harmful. It's easy to brush off your competitors only to find yourself in a tortoise/hare type situation.
I was Chinese. The real problem of China is its people: smart people don't want to be slaves of the CCP, and some of them can't stand the CCP fXXking with their minds, so they leave, like Manus. But the bad thing is: the rest, if they are smart, must be both smart and evil.
This isn’t surprising. They even enforced rules protecting Chinese government interests in the US TikTok company (https://dailycaller.com/2025/01/14/tiktok-forced-staff-oaths...), so I would expect them to be tougher within their borders.
China has been more cautious the whole year. Xi has warned of an "AI" bubble, and "AI" was locked down during exam periods.
More censorship and alignment will have the positive side effect that Western elites get jealous and also want to lock down chatbots. Which will then get so bad that no one is going to use them (great!).
The current propaganda production is amazing. Half of Musk's retweets seem to be Grok-generated tweets under different account names. Since most of the responses to Musk are bots too, it is hard to know what the public thinks of it.
Interesting, but for a country like China, where companies are partially owned by the CCP itself, I feel like most of these discussions would (should?) have happened in a way where they don't leak outside.
If the govt. formally announces it, I believe they must have already taken appropriate action against it.
Personally I believe that we are gonna see distills of large language models perhaps even with open weights Euro/American models filtering.
I do feel like everybody knows the separation of concerns, where nobody really asks Chinese models about China, but I am a bit worried, as recently I had wondered whether AI models can still push a Chinese narrative if, let's say, someone is creating a website related to another nation, or anything similar. I don't think it would be that big of a deal, and I will still use Chinese models, but an article like this definitely reduces China's influence overall.
America and Europe, please create open-source / open-weights models with lack of censorship (like the gpt model) as a major concern. You already have intelligence like Gemini Flash, so just open-source something similar which can beat Kimi/DeepSeek/GLM.
Edit: Although, thinking about it, I feel like the largest impact wouldn't be on us outsiders but rather on the people in China, because they had access to Chinese models, and there would be very strict controls there on even open-weights models from America etc. So if Chinese models carry propaganda, it would most likely try to convince the average Chinese person. I don't want to put a conspiracy hat on, but if we do: I think the Chinese social credit system could take a look at people who ask CCP-skeptical questions on Chinese chatbots.
Last time I checked, China's state-owned enterprises aren't all that invested in developing AI chatbots, so I imagine that the amount of control the central government has is about as much as their control over any tech company. If anything, China's AI industry has been described as under-regulated by people like Jensen Huang.
A technology created by a certain set of people will naturally come to reflect the views of said people, even in areas where people act like it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models—Chinese, American, European, etc.—so I wouldn't dub one that censors information they don't like as propaganda just because we like it, since we naturally have our own version of that.
The actual chatbots, themselves, seem to be relatively useful.
China is a communist country; every company is de facto under the state's control.
It might not feel like that on the ground, and the leash has been getting looser, but the leash is still 100% there.
Don't make the childish mistake of thinking China is just USA 2.0
Agreed. My point was that the leash was there, so most likely, if the news gets released to the public, it means they must have used "that leash" a lot privately too; the news might have a deeper impact than one might think, even if it stays hidden.
So even now, although I can trust Chinese models, who knows how long their private discussions have been happening and how long the Chinese govt has been using that leash privately, for chatbots like GLM 4.7 and similar.
I am not sure why China would actively come out and say they are enforcing tough rules though; it doesn't make much sense for a country that loves being private.
Yes, but that's the case for any company under any state. Do you believe that Apple is not under the US government's control just because they're allowed to criticize them?
Apple quite publicly defied the FBI with encryption and won. Tim Cook didn't disappear either.
Now ask Jack Ma about the time he even criticized regulations, much less defied them...
Believe it or not, that's the case I was thinking of when I asked, "just because they're allowed to criticize them?" A multi-national corporation like Apple having the freedom to criticize the US government doesn't mean that it has freedom from control, given that it's a US company. If Apple had similar criticisms during a much more critical moment (e.g., a war) or wanted to commit a critical act (e.g., transfer their chip design to be done primarily in China), they could very well find themselves subject to a clause in some vague, national security or espionage act.
Jack Ma was criticizing China's strategy for minimizing risk in its financial system, essentially arguing for more risk that could harm ordinary people to benefit his company, Ant Group. Unlike the US, much of the financial sector in China is state-owned, so it makes sense that they would follow the state's line. The worst that happened to him is that he had to step away from roles in his companies and stay out of public image, which is very different to the image of being disappeared.
Both of their companies are under their respective state's control. The only difference seems to be what you're willing to recognize as control, since I'm much more interested in what happens when push comes to shove.
> Both of their companies are under their respective state's control. The only difference seems to be what you're willing to recognize as control, since I'm much more interested in what happens when push comes to shove.
I can agree with your whole comment except for the fact that, for America, we are comparing an if/future statement about when push comes to shove; although one can speculate about a national security or espionage act or anything, nobody can be 100% certain that Apple would have to follow it.
Now compare this with China, where the state owns the financial sector and has a share in every company, so there is near-certainty that when push comes to shove, China is the likelier culprit.
I feel like everything breaks down when push comes to shove, though. Europe, which has its flaws, is still more stable (in most parts) in terms of blatant corruption and authoritarianism than the trends displayed by America right now; but if push comes to shove, I feel like Europe could have harsher rules than maybe even America, considering America's "freedom" sentiment.
The question I wish to ask, however, is: what are some countries you think are good if push comes to shove? I suppose Switzerland, but its reputation has gotten so good that it's become infamous for bad stuff. I am interested in what other countries you would list.
I wouldn't consider any country in particular to be 'good if push comes to shove,' given that most exist to promote an environment where companies can easily make money. If a state feels like its status may be in jeopardy, it'll do whatever it can to maintain that relationship (e.g., the Dutch government seizing control of Nexperia from its Chinese parent company Wingtech). Consequently, it really doesn't matter whether push comes to shove for the US, China, Europe, etc. since the actions taken will stem from the same root (e.g., the US won't let Intel go bankrupt).
This is part of why I really don't think authoritarianism is relevant to whether or not China will lead in AI. There are much better metrics for this, like the amount of resources poured into research vs. applications, or the kind of research being done (open source, more than just LLMs etc.).
If Trump tells Apple to put his face on everyone's lock screen, Apple laughs and says no. Trump can push but the courts will shoot it down.
If Xi tells Xiaomi to put his face on every Xiaomi phone, tomorrow everyone with a Xiaomi phone wakes up to Xi.
China is an authoritarian regime, through and through.
America is an authoritarian regime if you just read reddit comments all day.
The question is whether or not American companies like Apple are controlled by the US government. Do you genuinely believe that, just because you can go to a court, you're somehow free of control? Whether or not the state is authoritarian doesn't change that.
You must have a really distorted view of society to believe that companies can be free from their respective governments on the basis of freedom of speech, which is largely a Western concept.
> and won.
They didn't win, the phone was broken into with the help of a third party (so ultimately Apple actually did give the government a backdoor, unofficially) so the court case was mooted. Apple never actually defied the US legal system.
> Now ask Jack Ma
Ask ABC about their FCC license when they publish speech critical of the regime[1].
[1] https://en.wikipedia.org/wiki/Suspension_of_Jimmy_Kimmel_Liv...
No, they won. They never put a backdoor in.
Not sure how finding an exploit in software means the company is compliant. Microsoft would be criminally liable for untold damage if that was the case.
Kimmel's bosses were kissing the ring, but it's unlikely there was a real threat. The courts have been shooting Trump's authoritarian dreams down left and right.
China has no (legitimate) courts.
Apple can still fight the US government if they want. We are taking the example of the largest company in America, though, so of course they might want to curry favour with the government; if the government requests something, they will do it.
But this is because they are an extremely large company. On the other hand, smaller companies in America can actually be independent, while the same just isn't possible in China.
Also, even with companies like Apple, they don't really unlock devices for the government.
https://www.apple.com/customer-letter/answers/
So in a way, yeah. I'm not sure about the current oligarchy / kiss-the-ring type of deal they might have, but that seems a problem of America's authoritarianism and not the fault of the democratic model itself.
> every company is defacto under the states control.
This is kind of a nonsensical statement. Every US company is also de facto under US control, too. They're all subject to US laws. Beyond that, as demonstrated by the recent inauguration, the US oligarchs are demonstrably political pawns who regularly make pilgrimages to the White House to offer token gifts and clap like trained seals.
You can't hold up the US as some kind of beacon of freedom from state control anymore, for the past year all the major industrial leaders have been openly prostrating themselves to the state.
The US has a constitution and courts. Companies win against the government all the time.
China has no such thing. It's just the will of Xi.
> The US has a constitution and courts. Companies win against the government all the time.
Again: a nonsensical statement, China also has a constitution and courts, Chinese companies prevail in lawsuits against their government[1][2].
Your turn, name some top-tier US companies which have gone against the party and the regime in the past year.
[1] https://www.ft.com/content/1cddb8cc-a7ac-11e4-8e78-00144feab...
[2] https://chineseft.net/story/001076244/en
> You can't hold up the US as some kind of beacon of freedom from state control anymore
100% agree. I never said that America is a beacon of freedom. To be honest, it's Europe for me which still has overall more freedom and less blatant corruption than America right now.
I was merely stating that these are on a scale, though: European freedom (think Proton or similar; yes, I know Proton is Swiss, but still) > America's freedom > China's freedom.
It's just that in my parent comment I had mentioned American models solely because they are still better than China's in terms of freedom.
Europe already has Mistral, but a European SOTA model does feel like it would have advantages.
I stress about China because I'm pushed to. But I feel like we're all getting caught up and letting things go ways they shouldn't. 10 years ago when I did some work in China the companies were privately owned and just had a party member or two inside. It was different, but not what I had built up in my head. We went to some singing and drinking things, and the party members were just normal humans with normal human motivation when you got them to talk after a few drinks. Hell the ones I met were educated in the USA.
The damage internet discourse is doing between us all frankly seems the worst threat. Look at the H1B discourse. We hate a shitty American policy abused by AMERICAN companies, yet it gets turned against humans who happen to be from India. We gotta not do that. We gotta not let things between China and us get so out of control. This is going to sound America-hating, but look at how people see us Americans; it's not good. But we know we aren't as bad as they say. China has done things anathema to me. But the US has too. We have to work outside that. We have to. We have to. We have to get out of this feedback loop. We have to be adults and not play this emotional ping-pong.
> and just had a party member or two inside
This is exactly what I imagine and it's as chilling as anything ICE does openly or US insurance companies do to keep their bottom line moving up, because the ramifications are realized in silence. The silence is ensured by the same "regular" people in China.
Yes I 100% agree with you. Thanks for your insight.
> We have to be adults and not play this emotional ping-pong.
Your message does inspire me, but I feel as if there isn't anything which can be done individually about the situations of both China and America, or any country for that matter.
To me it's striking how much can change if we as a community do something compared to acting individually, but also that an individual must still try to stand for their morals even if people aren't backing them up, and how effortless it can be for a community if they act reasonably and listen to the individuals who genuinely want to help.
There is both hope and sadness in this fact, depending on the faith one has in humans in general.
I think humans are mostly really good people overall, but we all have contrary opinions that push things in such radically different directions that we cancel each other out.
I genuinely have hope that if the system can grow, humans can grow too. I have no doubt in the faith I have in people on an individual level, but I have doubt in my faith at the mass level.
Like, I wasn't saying that those Chinese individuals inside companies would be loyal to the party beyond everything; rather, when you scale it up to something which happens at basically every company, then I have doubts in the system (and for good reason).
I am genuinely curious: when you mention we have to be adults, what exactly does that mean at a mass scale? Like, if I gave you the ability to say one exact message to everybody at the same time, what would the message be, for the benefit of mankind itself and perhaps so we stop the infighting too?
I am super curious to know about that