Sora 2 Tightens Rules and Opens the Door for Creators
Nvidia builds trillion-dollar AI factories, OpenAI reshapes Sora for creators, Samsung's tiny model outthinks giants, and ChatGPT takes a personal turn.

My fellow AI explorers,
Nvidia became the world's most valuable company, gold hit $4,060 an ounce, and yet AI is the real gold rush. Jensen Huang is now steering a $4.5 trillion empire at the center of a $2 trillion AI future. OpenAI tightened Sora's copyright rules and opened its API floodgates, while a 7M-parameter model from Samsung quietly beat some of the world's top LLMs at reasoning.
In today's edition:
Nvidia's $4.5T reign and the coming "AI factory" era
Sora 2's new rules and what it means for creators
A 7M-parameter upstart that just outreasoned Gemini 2.5 Pro
Must See AI Tools
Payman: AI That Pays Humans. Over 10,000 signed up for the beta
SubMagic: An AI tool that edits short-form content for you! (Get 10% off using code "uncoverai" at checkout)
11Labs: #1 AI voice generator (Click Here to get 10,000 free credits upon signing up!)
ManyChat: Automate your responses & conversations on IG, FB and more! (Click Here to get your first month free)
Syllaby: The only social media marketing tool you'll ever need, powered by AI! (Get 25% off the first month or any annual plan with code "UNCOVER" at checkout)
OpenAI
API pricing, tighter moderation, and the road to creator economy
Sora 2 adds an API (~$0.10/sec of generation) and updates copyright controls after the initial anything-goes frenzy.
Key takeaways:
API = velocity: programmatic pipelines for storyboards, pre-viz, and ad variants.
Rights-holder controls: more stringent IP handling; expect denials where shows/celebs used to pass.
Monetization hints: OpenAI teases ways for rights holders/creators to participate (details pending).
Reality check for teams: budgeting by seconds makes iteration strategy crucial. You'll storyboard tighter, use reference frames, and chain edits to avoid starting from scratch.
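Budgeting by seconds is easy to sketch. Here is a minimal cost estimator, assuming only the ~$0.10/sec rate mentioned above; the function and the clip plan are hypothetical illustrations, not an OpenAI API:

```python
PRICE_PER_SEC = 0.10  # approximate Sora 2 API rate (~$0.10/sec of generation)

def campaign_cost(clips):
    """clips: list of (seconds_per_take, number_of_iterations) per shot.
    Returns total generated seconds and estimated spend."""
    total_sec = sum(sec * iters for sec, iters in clips)
    return total_sec, total_sec * PRICE_PER_SEC

# A three-shot storyboard: heavy iteration on the opening shot, fewer retries later.
sec, cost = campaign_cost([(8, 5), (12, 3), (20, 2)])
print(f"{sec}s generated, ~${cost:.2f}")  # 116s generated, ~$11.60
```

Chaining edits instead of regenerating whole shots is what keeps the iteration counts, and the bill, down.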
Prediction: Sora will bifurcate into a brand-safe commercial mode (clearable assets, provenance, watermarks) and an experimental creator mode (looser, but sandboxed). Tools that manage asset rights state will become table stakes in video pipelines.
Founders…
Need an AI Agency to Help Your Business Implement AI Solutions?
Our preferred partner Align AI provides you with an expert AI and Automation implementation team to add 10-40+ hours of increased productivity per employee and achieve your goals faster.
Jensen Huang
Gold may be flirting with $4,060 an ounce, but the real rush isn't underground; it's in silicon. Global spending on AI is projected to hit $1.5 trillion this year and, if the trend continues, $2 trillion by 2026. At the center of it all? Jensen Huang, the visionary CEO of Nvidia, which just became the world's most valuable company with a $4.5 trillion market cap.
In a new interview on CNBC, Huang outlined how Nvidia's early bets on full-stack AI infrastructure (chips, networking, and software) cemented its dominance, and why the company's partnership with OpenAI signals a new phase for both sides.
Here's what stood out:
Direct partnership with OpenAI: For the first time, Nvidia will sell systems directly to OpenAI, bypassing Azure and Oracle Cloud intermediaries.
AI factories on the horizon: Huang predicts each "gigawatt-scale AI factory" will require $50-60 billion in infrastructure, with Nvidia supplying everything from GPUs to networking and software.
Equity opportunities: Nvidia will have the chance to invest in OpenAI alongside other backers as the company moves toward self-hosted hyperscaling.
When asked about AMD's new OpenAI deal, Huang didn't hold back, calling it "clever but surprising" and noting that AMD gave away 10% of its company before building its next chip line. Nvidia's edge, he stressed, comes from controlling the entire AI stack, allowing for yearly leaps in performance while Moore's Law stagnates.
Prediction: Nvidia isn't just selling chips; it's quietly building the foundation for an AI-powered industrial revolution. Expect "AI factories" to become the next trillion-dollar infrastructure category, powering not just data centers but the entire digital economy.
AI SaaS Founders
Want Millions of Impressions For Your AI SaaS, Done For You?

At uncovernews.co, we specialize in getting AI SaaS products the attention they deserve through strategic influencer marketing campaigns designed to drive millions of impressions at a fraction of the cost!
Get Your AI Startup's News or Product In Front of Millions Quickly
LLMs
Small model, big brain. A single-author paper from Samsung Montréal introduces the Tiny Recursive Model (TRM), a 7M-parameter, 2-layer network that iteratively refines its own answers and posts ~45% on ARC-AGI-1 and ~8% on ARC-AGI-2, surpassing several frontier LLMs (e.g., Gemini 2.5 Pro, o3-mini, DeepSeek-R1) on these hard reasoning benchmarks at less than 0.01% of their size.
What's new (and why it matters):
Recursion over scale: TRM replaces heavyweight chain-of-thought and massive parameter counts with a light loop: propose → critique → revise → repeat. That "virtual depth" boosts generalization on symbolic/visual puzzles.
Clean break from HRM: It simplifies the earlier Hierarchical Reasoning Model (HRM), two interacting RNNs with deep supervision, matching or beating it while being smaller and easier to reason about.
SOTA-ish puzzle skills: Reported jumps include Sudoku-Extreme 55% → 87% and Maze-Hard 75% → 85%, alongside the ARC gains above. Code and results are public.
How it works (at a glance):
Keep two memories: the current guess and a reasoning trace.
Unroll the recursion for a fixed number of steps; each step updates both.
Stop when the refinement converges or time's up; return the best candidate.
This yields "depth via loops," not layers, avoiding the overfitting that appeared when stacking more layers.
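The loop above can be sketched in a few lines. This is a toy numeric stand-in, not the paper's code: the two hand-written update functions play the role of TRM's learned 2-layer network, while y (answer) and z (trace) match the two memories described above:

```python
def trm_refine(x, reason, answer, n_steps=500, tol=1e-9):
    """TRM-style refinement sketch: keep an answer y and a reasoning
    trace z, update both each step, stop early once y converges."""
    y, z = 0.0, 0.0
    for _ in range(n_steps):
        z = reason(x, y, z)        # critique: update the reasoning trace
        y_new = answer(y, z)       # revise: propose a better answer
        if abs(y_new - y) < tol:   # converged, stop recursing
            return y_new
        y = y_new
    return y                       # time's up, return best candidate

# Stand-ins for the learned updates: drive y toward sqrt(x).
reason = lambda x, y, z: x - y * y    # trace = current residual
answer = lambda y, z: y + 0.1 * z     # small corrective step
print(round(trm_refine(9.0, reason, answer), 6))  # → 3.0
```

The design point: extra effective depth comes from iterating the same tiny network, not from adding layers.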
Context & comparisons:
HRM showed that tiny models with deep supervision and multi-timescale recurrence can crush puzzles like Sudoku and mazes; TRM argues you don't need the hierarchies, just an efficient recursive updater.
On ARC-AGI, TRM's ~45/8 (v1/v2) beats many large LLMs under the paper's eval protocol; only specialized "thinking" behemoths remain ahead, at vastly higher cost.
Try it / read it:
Paper (arXiv) • Code (Samsung SAIL Montréal) • Explainers (Marktechpost, Forbes)
Prediction: We're seeing a new scaling axis: recursion depth at test time. Expect hybrid systems that pair small, specialized solvers like TRM with larger planners, getting frontier-level reasoning on devices without frontier-level footprints.
30-Second AI Play
Make ChatGPT answers feel like they know you (without chaos)
You've got four knobs. Use them deliberately:
1) Memories (fastest start):
Settings → Personalization → turn Memories on. As you chat, it auto-saves relevant facts.
Do: let it learn stable preferences (tone, formats, recurring goals).
Don't: let personal quirks bleed into work. Periodically prune.
2) Projects (best practice):
Create a Work and a Personal project.
Turn on project memories.
Add project instructions (voice, constraints, sources).
Upload evergreen files (brand guide, ICP, KPIs).
Result: each chat inherits only what's relevant to that context.
3) Custom Instructions (manual mode):
If you hate surprises, disable Memories and paste a tidy profile (role, goals, deliverable styles).
Pro tip: list current quarterly goals and forbidden outputs (e.g., "No hashtags. Keep to 120-160 words.").
4) GPTs (shareable presets):
Package a use-case (e.g., "Pitch Polisher") with instructions + knowledge files.
Great for teams; weaker at organizing ongoing multi-thread work. Use Projects for ongoing work, GPTs for repeatable tasks you'll share.
Simple recipe:
New/casual user → Memories → graduate to Projects → optionally add Custom Instructions → build/share GPTs for team workflows.
Other Relevant AI News!
Google's Gemini "computer use" model can operate a headless browser to complete tasks end to end; try a hosted demo and watch it navigate, click, and fill forms in a VM.
Perplexity Comet, their AI browser, opens to everyone for free, but early hands-on suggests it trails Chrome-level extensions from rivals in reliability; still worth a spin for research flows.
HunyuanImage 3.0 goes open-source and hits top-tier quality across logos, comics, and photorealism, another strong push from China's open model ecosystem (rankings).
xAI Imagine v0.9 adds snappy text-to-video previews from infinite-scroll image generations; fun for ideation, not production-ready yet.
Google Opal (no-code AI apps) expands to ~50 new countries and adds debugging, a gentle on-ramp for building lightweight agentic tools without wrangling infra.
Golden Nuggets
Nvidia is building trillion-dollar AI factories that could become as vital as today's energy grids.
OpenAI's Sora 2 update adds tighter copyright rules and an API, marking a new phase for AI-generated video.
Samsung's 7-million-parameter Tiny Recursive Model proves small models can now outreason frontier LLMs.
ChatGPT is becoming more personal, learning your context through apps, projects, and memories.
Have any feedback for us? Please reply to this email (I read them all!)
Until our next AI rendezvous,
Anthony | Founder of Uncover AI