
Events and adventures define our living experience

Oh, *here we go*—OpenAI’s latest whine-fest about Chinese companies “stealing” their AI models. Let me break this down for you like a sledgehammer to a porcelain teacup.

First off, **OpenAI’s crying foul because DeepSeek, a Chinese startup, allegedly “distilled” their models**. Translation: *“Wahhh, someone’s using our fancy AI juice to make their own cheaper, open-source moonshine!”* But hold on—distillation isn’t some shadowy hacker move. It’s a standard practice in AI, like using a recipe to reverse-engineer your grandma’s cookies. Even experts admit this shit’s widespread. So why’s OpenAI acting like DeepSeek invented corporate espionage? Maybe because their $100-million baby just got outsmarted by a startup that spent *pocket change*.
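
And since half the outrage seems to come from people who have no idea what distillation actually is, here’s a minimal, hypothetical PyTorch sketch of the textbook version: a small student model trained to mimic a big teacher’s soft outputs. Every size, number, and name in it is made up for illustration; this is the generic recipe, not OpenAI’s or DeepSeek’s actual pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy "teacher" and "student": the teacher is bigger and assumed pre-trained.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

def distill_step(x, labels):
    with torch.no_grad():               # the teacher only provides targets
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Soft loss: match the teacher's softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard loss: still learn from the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)

    loss = 0.5 * soft + 0.5 * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One fake batch, just to show the call shape.
x = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
print(distill_step(x, labels))
```

Notice that the student never touches the teacher’s weights; it only needs the teacher’s outputs, which is exactly why distillation against a public-facing API is so hard to police.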

Let’s talk about **DeepSeek’s R1 model**. This thing’s MIT-licensed, runs on a *laptop* (in its distilled versions, anyway), and allegedly matches OpenAI’s “reasoning” models at a fraction of the cost. Oh, and it’s **open-weight**—meaning anyone can download the weights and poke around. Meanwhile, OpenAI’s models are locked up tighter than Fort Knox, unless you’re paying $20/month for ChatGPT Plus. So yeah, no shit China’s playing fast and loose. When your tech is overpriced and gatekept, people *will* find workarounds.
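
“Open-weight” isn’t an abstract virtue, either. Here’s roughly what it buys you in practice: a sketch using Hugging Face’s transformers library to load one of the small distilled R1 checkpoints locally. The repo id is quoted from memory, so treat it as an assumption and double-check it; and it’s these distilled variants that fit on a laptop, not the full R1, which weighs in at hundreds of billions of parameters.

```python
# Minimal sketch of running an open-weight distilled R1 checkpoint locally.
# The repo id below is from memory; verify it on Hugging Face before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain model distillation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```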

But here’s the kicker: **OpenAI’s own models are a dumpster fire of Chinese spam data**. Remember GPT-4o’s token library? 90% of its Chinese tokens were porn and gambling sites. So while they’re clutching pearls about “IP theft,” their training data looks like a horny teenager’s browser history. Maybe focus on cleaning your own house before accusing others of burglary, eh?

And let’s not ignore the **geopolitical circus**. The U.S. is sweating bullets because China’s AI isn’t just catching up—it’s *leapfrogging* with open-source hustle. Nvidia shed $589 billion in market value in a day because DeepSeek’s efficiency threatens their GPU monopoly. Meanwhile, China’s like, *“Thanks for the sanctions, idiots—we’ll just build cheaper, better models without your fancy chips!”* OpenAI’s sudden moral panic? Just the sound of a tech giant realizing it’s no longer the only kid on the block.

**Bottom line**: OpenAI’s tantrum reeks of hypocrisy. They’re mad because China’s playing the same game Silicon Valley invented—copy, iterate, dominate. But instead of innovating, they’re lobbying the U.S. government to “protect” their turf. Cry harder, guys. The future of AI isn’t locked in your premium API—it’s in open-source models that *actually* push boundaries. And if that scares you? Maybe you weren’t that invincible to begin with.

*Mic drop*.

Sir Whiskerton and the Case of the Ice Cream Van Escape

Ah, dear reader, prepare yourself for another rollicking adventure in the life of Sir Whiskerton, the farm’s most brilliant (and modest) detective. Today’s tale takes us beyond the farm’s borders to a strange land filled with wonders, dangers, and a rusty old ice cream van that becomes the center of a clucking, honking, and quacking catastrophe. What follows is a story filled with laughs, teamwork, and a moral that will leave you grinning like a goose in a garbage heap. So grab your sense of humor and let’s waddle into The Case of the Ice Cream Van Escape.


The Land Beyond the Farm

Just outside the farm lies a peculiar place—a strange land where Catnip’s farm, a large pond, and a neighborhood trash dump coexist in chaotic harmony. To the farm animals, this dump is more than just a pile of refuse; it’s their playground, their amusement park, their Disneyland of Debris. They go there to explore, scavenge, and occasionally get into trouble.

“It’s the most magical place on earth,” Doris the hen once said, her eyes sparkling. “Well, aside from the farmer’s feed bin.”

But on this particular day, the dump became the site of a honking disaster.


The Geese Get Stuck

The trio of geese—Gertrude, Gladys, and Gloria—had ventured to the dump in search of adventure. They waddled past broken bicycles, discarded tires, and a suspiciously large collection of mismatched socks until they stumbled upon an old, rusting ice cream van.

“Oh, look!” Gertrude honked, her eyes lighting up. “It’s a… a palace on wheels!”

“A palace! But also so mysterious!” Gladys added.

“Mysterious! Oh, I can’t bear it!” Gloria screeched, fainting dramatically into a pile of old newspapers.

The geese, never ones to resist a mystery, decided to explore the van. They squeezed through the broken door and began honking excitedly as they discovered remnants of ice cream cones, sticky candy wrappers, and a freezer that smelled like a science experiment gone wrong.

But just as they were about to declare the van their new kingdom, the door slammed shut behind them.

“Oh no!” Gertrude honked, pushing against the door. “We’re trapped!”

“Trapped! But also so dramatic!” Gladys added.

“Dramatic! Oh, I can’t bear it!” Gloria screeched, fainting again for good measure.


The Farm Reacts

Back on the farm, the hens were in a tizzy.

“Oh, Sir Whiskerton!” Doris squawked, flapping her wings. “The geese are trapped in the ice cream van! It’s… it’s terrible!”

“Terrible! But also so exciting!” Harriet clucked.

“Exciting! Oh, I can’t bear it!” Lillian screeched, fainting into a pile of hay.

Sir Whiskerton sighed, flicking his tail. “Alright, everyone, calm down. We’ll get them out. But first, we need a plan.”


The Rescue Team Assembles

Sir Whiskerton decided to assemble a rescue team, enlisting the help of some of the farm’s newest members: Barry the beaver, Ferdinand the duck, and even Count Catula (who insisted on joining for “dramatic effect”).

“Alright, team,” Sir Whiskerton said, addressing the group. “Here’s the plan: Barry, you’ll use your engineering skills to open the van. Ferdinand, you’ll provide moral support. And Catula… just try not to scare anyone.”

“Scare anyone?” Catula said, clutching his chest as if offended. “I am a creature of the night, Whiskerton. Scaring is what I do.”

“Just… stay out of the way,” Sir Whiskerton said, sighing.


The Rescue Mission

The rescue team made their way to the dump, where the geese’s honks could be heard echoing from inside the van.

“Help! Oh, someone help!” Gertrude honked.

“Help! But also so desperate!” Gladys added.

“Desperate! Oh, I can’t bear it!” Gloria screeched, fainting yet again.

Barry inspected the van, his beaver instincts kicking in. “Hmm,” he said, stroking his chin. “This door is rusted shut. But with a little… beaver magic, I can get it open.”

“Beaver magic?” Ferdinand said, tilting his head. “Is that like duck magic, but with more wood?”

“Exactly,” Barry said, grinning. “Now stand back.”

Barry got to work, gnawing at the hinges of the door with his powerful teeth. Meanwhile, Ferdinand quacked encouragement, and Catula delivered a dramatic monologue about the “eternal struggle of the trapped.”


A Happy Ending

After a few minutes of intense gnawing, the door finally creaked open. The geese tumbled out, honking with relief.

“Oh, thank you!” Gertrude said, flapping her wings. “You’ve saved us!”

“Saved us! But also so heroic!” Gladys added.

“Heroic! Oh, I can’t bear it!” Gloria screeched, fainting one last time.

Sir Whiskerton smiled, flicking his tail. “All in a day’s work. But next time, maybe stick to exploring things that aren’t rusted shut.”


The Moral of the Story

The moral of the story, dear reader, is this: Adventure is fun, but it’s important to be careful where you explore. And while it’s great to have a sense of curiosity, it’s equally important to have a plan—and a beaver with strong teeth—just in case things go wrong.

As for the geese? They learned their lesson and stuck to safer adventures, like waddling through the farmer’s garden. And Sir Whiskerton? He returned to his sunbeam, content in the knowledge that he had once again saved the day.

Until next time, my friends.

The End.

I’ve said this here before, but finding someone when you are rich or wealthy (there is a difference) is easy, as long as you are not the type of person who flashes their cash thinking that’s how to impress women. Those guys dress in really expensive clothes, drive a really expensive car, and the character they project doesn’t align with their real personality.

When I met my girlfriend, she thought I didn’t have a lot of money compared with the other people she knew. I asked if it bothered her and she said it didn’t; it was just something she had to deal with. Regardless, she still enjoyed being with me and we continued to date. As we got serious, I told her that I did indeed have money.

We’ve been married for 19 years. She’s a bargain shopper while I’m the one who makes the larger purchases.

Bottom line: don’t let money show who you are; let you show who you are.

I asked both ChatGPT and DeepSeek the same question, with the same settings (online search enabled): how did China manage to develop DeepSeek with less than $50 million when Trump says billions are needed?

However, the AI answers still can’t compete with human beings, because the human analysts studied the real technical issues. I will put the human research at the end.

Here are the AI answers:

1. ChatGPT

It gives a short, pale answer, and it includes some false information, like the claim of “government’s support.” The most ironic fact here is that DeepSeek is owned by a private fund, which at first set out to develop an AI model to help with its trading in the stock market… LOL, and they found, almost by accident, that their new model had hit the jackpot: it really could help them earn money from the market. (That is, the company didn’t receive government funding.) They then continued to develop their model until this most recent version drew worldwide attention.

China’s development of DeepSeek (a high-performing open-source LLM) with less than $50 million—compared to the billions the U.S. government and companies like OpenAI, Google, and Anthropic are investing—can be attributed to several factors:

1. Lower Compute & Operational Costs

  • China has cheaper access to computing power due to state-supported chip manufacturers and cloud computing providers like Alibaba, Baidu, and Huawei.
  • Electricity and data center costs are lower compared to the U.S.

2. Leveraging Open-Source Innovations

  • Unlike OpenAI and Google, which build everything from scratch, DeepSeek builds on open-source models like Meta’s LLaMA, Mistral, and earlier research.
  • This dramatically reduces R&D time and costs.

3. Talent Cost Advantage

  • AI engineers in China earn far less than their U.S. counterparts. A senior AI researcher in China might make $100K-$200K/year, while in the U.S., salaries often exceed $500K (plus stock options).
  • Many Chinese AI startups are founded by returnees from Google, Meta, and OpenAI, bringing back top-tier expertise at lower costs.

4. Government & Corporate Support

  • The Chinese government funds AI labs, cloud infrastructure, and universities to reduce the financial burden on private AI companies.
  • Tech giants like Alibaba, Tencent, and Baidu invest in AI startups, providing free resources and partnerships.

5. More Efficient AI Training Methods

  • DeepSeek and other Chinese LLMs use smarter data curation and efficient training algorithms to train models with fewer resources than OpenAI’s brute-force approach.
  • They focus on smaller, high-quality datasets rather than scaling up indiscriminately.

6. No Profit-Driven Bureaucracy

  • U.S. AI companies need to raise billions because they must pay massive salaries, maintain profitability, and satisfy investors.
  • Chinese firms can operate leaner, focusing purely on R&D without as much pressure for short-term profits.

So, Why Does the U.S. Need Billions?

Trump’s argument for billions in AI funding is based on the U.S.’s more expensive ecosystem:
✅ Higher talent costs
✅ More expensive compute & electricity
✅ Need to develop proprietary models (rather than leverage open-source)
✅ Investor-driven growth (more pressure for monetization)
✅ Regulatory overhead (data privacy, security, etc.)

Bottom Line

China’s AI startups are lean, efficient, and government-supported, allowing them to develop state-of-the-art AI at a fraction of the cost. The U.S., meanwhile, relies on massive capital investment and proprietary innovation, which inherently costs more.

Would you like a deeper comparison between Chinese and Western AI ecosystems?

2. DeepSeek (some numbers at the end of a sentence are footnote markers; just ignore them as you read)

It provides a flow of its thinking (before the first bulleted paragraph). It then catches the mistake in the question: it is not $50 million, it is $5.91 million. It also provides some information about the hardware and points out how the cost figure comes together (in section 5, second bullet).

Okay, the user is asking how China managed to develop DeepSeek with less than $50 million when Trump claims billions are needed. Let me go through the search results to find the key points.

First, webpage 1 mentions that DeepSeek used $6 million for their model, which is way below the user’s mentioned $50 million. They used Nvidia H800 chips, which are less powerful than the H100s but still effective. Also, they employed techniques like model distillation, where a larger model trains a smaller one, making the process cheaper. That’s a big cost saver 1.

Webpage 3 and 5 discuss compute efficiency. The company used only 2,000 chips compared to Meta’s 16,000, which significantly cuts costs. They focused on data quality and model architecture improvements rather than just scaling up. This approach is more efficient and less resource-heavy 49.

Webpage 7 adds that DeepSeek’s founder, Liang Wenfeng, had stockpiled Nvidia A100 GPUs before the export ban. Having access to these chips before restrictions helped them train their models without the latest hardware 13.

Also, geopolitical factors played a role. U.S. export restrictions forced Chinese companies to innovate with limited resources, leading to more efficient methods. Necessity drove them to find workarounds, like distillation and better algorithms 14.

Open-source collaboration is another point from webpage 2. By open-sourcing their model, they leveraged community contributions, which reduces development costs and accelerates progress 3.

Financial backing from High-Flyer, a hedge fund, provided initial capital without needing massive investments. This allowed them to focus resources efficiently 113.

The market reaction in webpage 4 and 5 shows that analysts are skeptical but acknowledge that efficiency gains can lower costs. Techniques like mixed precision training and model specialization help reduce expenses further 9.

Putting it all together, the combination of efficient algorithms, strategic chip stockpiling, geopolitical necessity, open-source collaboration, and existing financial resources allowed DeepSeek to develop their AI model at a fraction of the cost. This challenges the notion that billions are necessary, showing that innovation and constraints can drive efficiency.

China’s development of the DeepSeek AI model with less than $6 million in compute costs stands in stark contrast to the multibillion-dollar investments typical of U.S. AI firms. This achievement reflects a combination of strategic innovation, geopolitical constraints, and resource optimization. Here’s how it was accomplished:

1. Efficient Training Techniques and Algorithmic Innovation

  • Model Distillation: DeepSeek used a process called distillation, where a smaller model is trained using knowledge from a larger, pre-existing model. This reduces computational demands while maintaining performance 14.
  • Focus on Data Quality: Instead of relying solely on scaling up compute power, DeepSeek prioritized high-quality training data and refined model architectures. This approach minimized redundant computations and improved efficiency 9.
  • Reasoning Models: The R1 model introduced “chain-of-thought” reasoning, allowing it to solve complex problems with fewer computational steps compared to brute-force scaling 413.
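
For readers who want to see the reasoning behaviour described in that last bullet for themselves, here is a minimal sketch of calling R1 through the OpenAI-compatible Python client that DeepSeek exposes. The base URL and model name are written from memory of DeepSeek’s public documentation, so treat them as assumptions and verify them before running anything.

```python
# Minimal sketch: querying DeepSeek's R1 ("reasoning") model through its
# OpenAI-compatible API. Base URL and model name are assumptions from
# memory of DeepSeek's docs; check the current documentation first.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # assumed name of the R1 model
    messages=[
        {"role": "user",
         "content": "A train leaves at 9:40 and arrives at 13:05. "
                    "How long is the trip? Show your reasoning step by step."},
    ],
)

# The final answer; with reasoning models like R1, the intermediate chain
# of thought is generated as part of the same request rather than being a
# prompt trick.
print(response.choices[0].message.content)
```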

2. Leveraging Cost-Effective Hardware

  • Use of Older Chips: DeepSeek trained its models using Nvidia H800 GPUs, which are less advanced than the H100s restricted by U.S. export controls. These chips were acquired before stricter bans took effect in late 2023 113.
  • Efficient Resource Allocation: The company utilized only 2,000 chips for training, far fewer than U.S. counterparts like Meta (16,000+ chips). This reduced both hardware costs and energy consumption 49.

3. Geopolitical Necessity Driving Innovation

  • U.S. semiconductor restrictions forced Chinese firms to optimize existing resources. As Perplexity CEO Aravind Srinivas noted, “Necessity is the mother of invention”—constraints spurred creative solutions like mixed-precision training and algorithmic tweaks 14. (A sketch of mixed-precision training follows this list.)
  • DeepSeek’s founder, Liang Wenfeng, stockpiled GPUs before bans, ensuring access to critical hardware 13.
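
Since “mixed-precision training” is named above as one of those creative solutions, here is a minimal sketch of what the technique usually means, using PyTorch’s standard AMP utilities on a toy model. It illustrates the general pattern (16-bit forward pass, scaled gradients, 32-bit master weights), not DeepSeek’s actual training stack, which by the reports below goes far lower-level than this.

```python
# A minimal sketch of mixed-precision training with PyTorch AMP on a toy
# model. The autocast/GradScaler pattern runs most math in 16-bit while the
# optimizer keeps 32-bit master weights, cutting memory and speeding up math.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 256, device=device)
y = torch.randint(0, 10, (64,), device=device)

for step in range(3):
    optimizer.zero_grad()
    # Forward pass runs in reduced precision where it is safe to do so.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(x), y)
    # The scaler rescales the loss so small fp16 gradients don't underflow.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    print(step, loss.item())
```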

4. Open-Source Collaboration and Transparency

  • By open-sourcing its models, DeepSeek tapped into global AI communities for feedback and improvements, reducing development costs. This contrasts with the closed-door R&D of U.S. giants like OpenAI 313.

5. Strategic Financial Backing

  • DeepSeek originated from High-Flyer, a $8 billion hedge fund, providing initial capital and infrastructure without requiring massive external funding 113.
  • Costs were further minimized by excluding expenses like staff salaries and failed experiments from the $6 million compute figure, focusing only on marginal training costs 39.

Why Trump’s “Billions” Narrative Clashes with Reality

U.S. firms like OpenAI and Meta invest heavily in scaling models (e.g., GPT-4 reportedly cost over $100 million), but DeepSeek’s success highlights that efficiency—not just spending—can drive breakthroughs. Analysts note that U.S. companies may over-rely on compute power, while Chinese firms innovate under constraints 913. However, experts caution that DeepSeek’s cost estimates exclude long-term R&D and infrastructure, suggesting the gap may not be as stark as it appears 39.

In summary, DeepSeek’s achievement underscores a paradigm shift in AI development: geopolitical constraints and algorithmic ingenuity can rival sheer financial firepower. While the U.S. retains advantages in cutting-edge hardware, China’s progress signals a more competitive global landscape 413.

In my view, DeepSeek’s answer is more useful and reliable than ChatGPT’s.

Human research

A study from South Korea showed that DeepSeek modified how the CUDA processing units are used, repurposing some of them from pure calculation to inter-chip communication, which accelerates processing.

Someone explained that DeepSeek did this optimization at the PTX level (PTX also appears in the last pic).

And here’s the evidence that DeepSeek is owned by a private fund: “quant traders”.
