https://www.hollywoodreporter.com/business/digital/openai-sh..., https://archive.ph/ABkeI
I used to think they were pretty clever but with this news and other recent ones (Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose) they just seem schizo.
Idk if it’s because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, e.g. I have a codex session which says ~500M in / ~2M out.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
Just that they took down some "io" mentions because of some trademark dispute with a third party "iyo".
Disney Exits OpenAI Deal After AI Giant Shutters Sora
https://www.hollywoodreporter.com/business/digital/openai-sh...
A source familiar with the matter tells The Hollywood Reporter that Disney is also exiting the deal it signed with OpenAI last year, in which it pledged to invest $1 billion in the company and agreed to license some of its characters for use in Sora.
“As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere,” a Disney spokesperson said. “We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video gen product to replace Sora?

1. OpenAI killing off their own products aggressively, taking a page from Google’s book. (I think the way you meant it)
2. Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete. (My first instinct when reading it)
What would you place here anyways? Chegg and Stack Overflow?
Weil's now heading "AI for Science": https://www.pymnts.com/personnel/2025/openais-chief-product-...
* It was (assumedly) expensive to run.
* It was not good enough for customers to seriously pay for.
* There were too many content restrictions for it to be fun for most people.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
i think it's clear cloud hosted is the actual future, which people have predicted for decades. it will never make financial sense to duplicate locally what you can get for cheap from a datacenter, where the hardware is oversubscribed, benefits from economies of scale, and runs under "if we let this sit idle it's losing us money" pressure.
this has been the case for a long while now, and will increasingly be so as data centers buy up everything.
On a more serious note, it could be a sign of a more powerful and general model being developed/released in the near future, that would include Sora capabilities. Or AI-doomers were right, and this sunset is one of the proofs for them.
I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora, I don't think they have good options left.
Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.
> We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
Coding is where the money is. https://news.ycombinator.com/item?id=46432791#46434072
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
If anything, my intuition tells me software engineers have spawned uncountable numbers of jobs that never would've existed otherwise.
I also wonder if they got the $1B from Disney. Was that even a paid-for deal, or just another "announced" deal? No article I found mentions anyone signing any paperwork, which seems to be typical of AI journalism these days. Every AI deal is supposedly inked, but if you dig deeper, all you find are words like proclaimed, announced, agreed upon.
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
That narrative will implode like Sora later this year.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
No, by far no. I’m by all accounts “decently skilled human”, at least if we go by our org, and it blows anyone out of the water with some slight guidance.
And the most important part: it doesn’t get tired, it doesn’t have any mood swings, its performance isn’t affected by poor sleep, party yesterday or their SO having a bad day.
Even with years as a principal engineer at a company with high coding standards and engineering processes?
Then of course the hype collapsed and now even the usecases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and visualising complex designs while 3D designing.
I see the same with generative AI and LLM. It's really good with programming. It's definitely good at making quick art drafts or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good for everything that it's trying to be sold as. Just like the VR craze, they're dragging it by the hair into usecases where it has no business being. A lot of these products are begging to die.
For example, an automation tool driven by natural language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not great at meeting summaries, especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear, but I hope a realignment to the usecases where it actually adds value happens soon.
It is astonishingly poor at this. My intuition was that it should be good at this (it is basically a translation problem, right? And LLMs are fundamentally translation systems), but the practical results are so poor. Not just mis-identifying speakers (frequently saying PersonX responded to PersonX), but reaching conclusions that are the complete opposite of what was actually said.
I'm genuinely intrigued as to what approaches have been taken in this space and what the "hard problem" is that is stopping it being good.
Generating pointless AI videos for pocket change or ad revenue is a loser in comparison.
But it was largely fun to try to transgress against the limitations. Who could trick the AI to generate something outlandish and ridiculous.
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. e.g. It was impossible to make a video of an angry chef with a British accent because Sora would always overfit it to Gordon Ramsay, and flag its own generated video after it was created!
[0] https://news.ycombinator.com/item?id=39386156 - only one mention of "AI slop" in the entire thread, though partial credit goes to "movieslop".
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
[0] https://en.wikipedia.org/wiki/Sora_(text-to-video_model)
For example, early TikTok had the Boss Walk.
Sora had no big content trends that split into many micro trends within some established ~universe.
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
So OpenAI has done the right thing as a startup here, gotten lots of training data, and observed lots of user behavior that they can now apply going forward.
The Sora models, on the other hand, aren’t going anywhere, and I believe OpenAI will continue to invest in them. They’re getting better and better, just like Google’s Veo, which is quite good at generating videos as well.
Using Codex and agent skills, it’s actually quite easy to generate a storyboard and then have a list of shots in that storyboard. Then generate videos from those storyboard stills, and then finally assemble those individual video files into a final movie file using something like ffmpeg. It's also very easy to create a voiceover with TTS and even simple music using ChatGPT Containers (aka the python tool).
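The assembly step described above can be sketched out; this is a minimal, hypothetical example (the filenames are made up, and it assumes all per-shot clips were rendered with the same codec and resolution) using ffmpeg's concat demuxer, which stitches a playlist of clips into one movie file:

```python
import pathlib
import subprocess

# One rendered clip per storyboard shot (hypothetical filenames)
clips = ["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"]

# The concat demuxer reads a playlist file: one "file '<name>'" line per clip
playlist = pathlib.Path("shots.txt")
playlist.write_text("".join(f"file '{c}'\n" for c in clips))

# -c copy concatenates without re-encoding, which only works when every
# clip shares the same codec, resolution, and framerate
cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", "shots.txt", "-c", "copy", "movie.mp4"]
# subprocess.run(cmd, check=True)  # uncomment once ffmpeg is installed

print(playlist.read_text())
```

A voiceover track generated by TTS could then be muxed in with a second ffmpeg pass (`-i movie.mp4 -i voiceover.mp3 -c:v copy`), but re-encoding details vary by source material.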
This will 'democratize' (ha ha, for people with money obvi) a lot of video creation going forward. Against all wisdom, I am actually quite bullish on this technology, especially in the hands of young people. They are very creative and have lots of stories to share.
Necessary disclaimer as usual around the ethics of how these models were created: all the AI companies have totally ripped off artists in service of creating these models. I wish something would be done about that but I'm not holding my breath. No politician seems to want to touch it.
This may well be a needed reprioritization in the face of resource constraints, but it ain't a masterful Xanatos gambit.
Agree, and didn't intend to imply that. This is just a good startup move that gets a big headline because it's OpenAI. Other startups around the world do the same thing all the time.
I think OpenAI had a brief delusion that it could become some huge social networking app. The app was heavily modeled after TikTok.
Any platform which focusses on AI generated videos is doomed.
sir, have you seen tiktok?
I don't do design, or make videos, or ask AI for legal or medical advice, because I lack the skill and understanding of those fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts that generate engaging video.
https://finance.yahoo.com/news/openai-sora-app-struggling-st...
Maybe. OpenAI shuttering Sora is in line with them shifting focus towards B2B sales, instead of B2B2C or B2C.
Interestingly, Aditya Ramesh, who iirc was the Sora 1 lead, is now "VP of Robotics" at OpenAI per his Twitter bio: https://x.com/model_mechanic
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
1. https://dev.to/tyson_cung/amazon-lost-63m-orders-after-ai-co... 2. https://venturebeat.com/security/meta-rogue-ai-agent-confuse... 3. https://x.com/elonmusk/status/2031352859846148366
It’s quickly become the modern day equivalent of Comic Sans, WordArt, and the default clipart illustrations included in Word ‘98.
Perhaps most people are absolutely devoid of any taste for what makes art? I don't know.
That said, there are still people with exceptional aesthetic sensibilities in the tech field, obviously. They're just largely not in this space.
Hustling just to barely stay afloat or drown means no time to compete with our own output.
America is a financially engineered joke regurgitating its own recent history, collapsing like an LLM trained on its own output. The rich aren't even pretending it's "a free country" anymore: they have enough wealth for however many years most of them have left to live, and having seen the apathy that keeps the average person in their lane, they don't fear the public.
It'll all collapse as they generationally churn out of life, and the Millennials on down, with zero skills but "data entry into a computer", will be holding an empty bag, taking orders from foreign nations that bought up all the American businesses we built.
In practice, people would just generate the videos with the app and then post them on regular social media, in which case OAI would not get the ad revenue.
It's the age-old "your product is just a subset of another product".
It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.
If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
I'd rather eat poison
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands and IPs at a fast enough rate to displace the lack of being able to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
People want to generate IP.
edit: clarity
Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests.
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
It sets a precedent for those creators to now also hold these companies responsible. That's not a bad thing under the current legal system.
Also, seeing genuine original creations made with AI assistance is much more interesting to me.
The great disappointment about how all of this is marketed is that what AI should be good at doing, enhancing a tiny budget, is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange, I want some weirdo's fantastical horror movie that he could never get financed, but was able to green-screen and use AI to generate everything. I don't want a goofy top 40 country song full of silly lyrics, I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
As a onetime semi-pro musician, with decades of live performance and sound design experience:
I would rather burn my beloved instruments publicly and pee on the fire.
> It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands
The problem is, to create a brand, you need to be able to protect it against rivals either ripping you off, or diluting it.
The same mechanism that protects "big" IP also protects everyone else, even the small players.
> they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses
They already do that for music. But the issue is this: if we want culture, we need to find a way to pay for it. Is it possible for a bunch of mates to make enough money to live on playing in a local band? Not really. They can only really make money if they either have a viable local gigging scene, or a large enough online following to sell merch/Patreon.
The big IP merchants were quite keen on videogen, because they sense that it's possible to cut out the expensive artists. If they don't have to pay actors, writers, and artists, then it's way more profitable for them. This is part of the reason why AI hasn't been hit with the Napster ban hammer.
I think the other thing to remember is that creating good IP is hard; you can't really just pull it out of your arse after 5 minutes. The original seed takes a long time to refine, test, and evolve. Even the half-arsed sequels require work.
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
Where can I get this data?
https://variety.com/2025/digital/news/youtube-trending-page-...
Bummer. It used to be at:
https://www.youtube.com/feed/trending
So last year, these were the top videos:
https://web.archive.org/web/20250324155132/https://www.youtu...
There's this, but it's nowhere near as good as seeing the actual videos:
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
It's not an exaggeration to say that this is how millions of people use Facebook. It might be not how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
Most people do not care about the technology, and frankly they don't want to know about it. They want great experiences. That's it.
Technologists seem to have a reallyyyy hard time getting it.
The other one is TV/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad with maybe $50 in API costs. Cost of production in marketing is crazy.
Obv this is under the assumption AI is good enough to do either of those things. Which it isn't so far; the best I've gotten is b-roll shots to stick together for an ad.
I've no doubt that content creators outside of social media were using it as well, either for their brand or other video work.
Yes we see AI reels all over the place, but that's not only what it was used for
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
This is a "seismic shift" in the sense of the Big One hitting California. The knock-on effects of trust erosion caused by AI are going to be huge and potentially unrecoverable.
Not every place has LEGO incest porn… or whatever the kids are into these days.
1. There's an AI-based virtual girlfriend industry that mixes text and images
2. There's an AI-based virtual boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models)
3. There's a much shadier AI-based "undress this specific woman" industry
Yes, revenge porn is very effective at causing harm, even though it can be generated.
No, because 'plausibly deniable' has never worked for social consequences and shame.
Yeah, marketing. Which is a huge market...
After placing my hand on the red-hot stove, aren't I super smart for now removing my hand?
That is, hiring Meta execs who focus on gaming numbers with no care for, or sensibility about, product.
Wild really. Well done Sam.
So I agree with you, but it also makes me wonder what they're even selling when the IPO happens (supposedly as early as late summer 2026)? Data centers? Partnerships with the government?
I had thought this would be combined with OpenAI launching a set top box where you could talk to an AI avatar. Disney IP could have been skins to sell people for their AIs.
- sora was not great at making what you asked
- i probably got 3 good videos out of 100 gens
- every video that was good needed editing outside of sora (and therefore could not be shared within sora)
just my experience
I’ve given it different levels of open-endedness: give this flow chart an aesthetic like this mechanical keyboard, or generate an SVG of this graphic from a '70s slide show. But it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
That will likely happen in specialized fields. We can already see tools like Figma, Miro, and others that generate functional-ish frontend components in full TypeScript with corresponding styles (that are also selectable and configurable in the interface). They're not quite as free-form, since they load their own base framework and components to ensure consistency and sanity/error-checking, but even then you get usable, modifiable components that you can work with precisely in your normal DSL.
For video, this likely exists, or is being worked on as we speak. All specialized domain tools will go towards this model to allow those domain experts to use the tools with the precision they expect AND the agentic gains we already take for granted.
A lot of YouTube content is really talk, so it was easy to create Sora videos as video content while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
Fixed that for you :-)
Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?
— https://www.businessinsider.com/openai-discontinues-sora-vid...
So yeah, focusing on world models
We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.
Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?
And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.
Also, being able to problem solve and being able to suffer are two different things and in my opinion completely separable. You can have one without the other.
If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
At a higher level of intelligence than many humans, current experience suggests
The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more like looking back at ourselves through a mirror.
I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence, and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.
Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.
From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
2) google and specialized video-only startups are simply doing a much better job than they were.
This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don’t think many people want to consume AI generated content exclusively (like Sora’s app attempted). However, I expect AI generated content to continue to improve in quality until it’s used as a component in most media we consume. You and I will eventually stop noticing it, kids will be raised with it as normal, and the anti-AI millennial/GenX crowd will age out of relevance.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet...
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
Offerings like Kling and ByteDance's Seedance are considered much better.
I find myself increasingly nostalgic for the Clinton era. I am not at all sure I will enjoy the version of fuckedcompany that gets vibe coded when this bubble pops.
Which makes me wonder whether these companies actually dogfood their own tools with this sort of stuff? Was this announcement written by ChatGPT? Honestly, I would find either answer to be a little concerning in its own way. It's either vaguely insulting to their customers or showing a lack of faith in their own product.
For an app to suggest a personal relationship with you is ridiculous.
it reads as "we want to tell you that what you made with sora mattered, but we all know it didn't".
Better for OAI to spend their human and compute resources on something else.
Sora had to be shut down because it was the clearest, most consequential demonstration that OpenAI’s models are running way, way ahead of their ability to align/jail them effectively.
What happens if you turn a "human-level" intelligence off? Did you kill someone?
AGI is a pipe dream - and moreover it's not even something that anyone actually wants.
And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.
You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811
It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.
But now that the deal is off, I'm sure their legal team will attempt to once again change copyright law in their favor.
Not a great look that either the teams responsible for Sora didn't know this was coming or the decision was so brash that things changed overnight.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
Want to hear the one TRICK most people forget when doing X...?

Kind of insulting to lump Google in with xAI? Like, is anyone even using xAI other than backwater government agencies?
xAI doesn't have "content moderation" around adult content, so that usage is quite popular.
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
Will be interesting to see.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
Sometimes people want to paint, sometimes people want a painting.
To have wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.
Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.
Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.
No seriously, try it out.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
You didn't at least puff a little ack through your nostrils for that one?
Having said that, I absolutely hate the audio format; I only used it when I had to drive or when I swam laps. But these days I do neither.
Or before! Either is mandatory to actually learn the content.
This makes the whole of NotebookLM somewhat less useful, but still.
Sometimes I'll take deep research output and listen to it too that way.
^ this is important.
Otherwise you may very well be missing anything really surprising or novel.
See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report of NotebookLM where
> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
24/7 titillation is boring
TikTok and social media is a strange mix of both, people posting response videos to everything.
Personally, I've stopped subscribing to Spotify, YT Music, etc. because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it doesn't become grating after a few days the way a favorite song does.
The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
But I guess the desire to create something that others would consume is also different from the desire to simply create.
The musician in me just shed a tear
There is a fundamental issue of trust here. Facebook has me tagged as history nerd so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there.
For throwing an uppercut it's even more important, if you follow wrong slop instructions you can end up breaking your wrist or fingers.
I wonder what OP categorises as 'mainstream'. As a classical musician this breaks my heart.
There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI generated videos, the question is whether those get enough engagement for the creators to pay for the creation tool.
And this is the challenge that these tools have - they have to have a free tier to get people to explore it, but unless they can make it a habit, those people will never upgrade to a paid subscription.
I have no figures, but if I'm being optimistic, these freemium subscription services have 10% conversion rate at best; can that 10% pay for the other 90%? For a lot of services that's a yes, but not for these video generators which are incredibly compute intensive.
I'm sure there's a market for it, but it's not this freemium consumer oriented model, not without huge amounts of investments. Maybe in 5-10 years, assuming either compute becomes 10-100x cheaper / more available, or they come up with generators that run cheaper.
https://www.hollywoodreporter.com/business/digital/openai-sh...
It says a lot about the current economy that consumers have no money. Will companies just stop making consumer products?
Yes. I have noticed that it is close to impossible to get good deals on flights, hotels, or even good discounts online. Sellers have all the information about consumers that they need to maximize their profit and extract the maximum amount from consumers. Dynamic pricing is making it a personalized experience, so I personally pay the maximum I possibly can.
No room to get a fair price anymore.
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial-liar CEO, who said he was gonna spend a trillion dollars, can’t keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They’re both pioneering in their space and yet are mere commandeers (not innovators) when it comes to technology and importantly end user products.
They’ll also be extremely valuable, like Meta due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
They probably see how much Anthropic is absolutely crushing them in developer mind share (see, people who buy tokens) and want a piece.
Is it happening? :) /s
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
and others. There are free to use tools also.
Totally disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
No it didn't; OpenAI had control.
Saying Sora democratised video generation is like saying that landlords democratised home ownership.
There's a web interface as well.
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
For a litmus test of your perspective, try using sora. Try to make a video that makes someone genuinely laugh. Sora doesn't prompt itself. Human creativity and humor is still required.
Sure, it was moderated to heck, like all models attempting to avoid PR disasters (see Grok), but, just as with Youtube and broadcast TV, there's still a corporate friendly surface area that excludes porn, gore, etc, that people can enjoy. And yes, people like different things.
Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.
But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.
Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury)
Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.
Your counter-examples have the property that most of the things you need to learn are absent from the media being watched, leading to an observation that is "obviously" true, but they ignore the impact of media on a viewer who is trying to properly incorporate other pieces of information. To compare with the mental models being discussed, you'd have to actually consider the effects you're writing off as negligible; when it comes to something like a world model, which we've learned only by observation and which doesn't rest on much additional specialized knowledge, those effects might be much more impactful.
Sure, be ready to get them out, and if they’re trapped and it’s going to be a while until fire shows up start working on that. But my mental model is that for any road legal car that is not currently on fire, there is a higher chance you’ll cause harm by rashly moving a victim than that a victim will be suddenly consumed by an enormous Hollywood style conflagration.
"AI" consumes energy before the user has even started (during training).
That comes on top of the comparison for each particular use.
Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output, and represent the up front cost for the producer.
Both a movie and a language model can cost tens or hundreds of millions of dollars to produce.
In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with GPUs for LLMs. These are also upfront (capex) costs.
At consumption time, the movie requires some additional resources per viewing, whether it's a movie theater or streaming. Likewise, an LLM consumes some resources at inference time. These are opex. In both cases, the marginal cost of inference/consumption is quite low.
> Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output

I did not say anything about consumption of the output. Maybe you misread what I wrote; it is about energy consumption.

> Both a movie and a language model can cost

But we weren't comparing the cost of the movie to the cost of a language model.

> can cost tens or hundreds of millions of dollars

But we weren't talking about dollars, we were talking about energy. We're clearly exploring different questions.
CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
> CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I've literally laughed out loud after reading this. I can't believe you're stretching this in good faith.
But if you are - well, you certainly have a unique perspective.
I am willing to suspend disbelief for Terminator 1, even if it's clear that it's the head of a doll in the shot.
But it is insulting to feed slop to your audience; it shows you didn't even try.
I have actually seen one slop video that I kinda enjoyed - it was obvious that great effort was put into the script and the details, just as it was obvious it wasn't being passed off as the real thing.
Films on film using in camera effects are still made on occasion but they’re art films for niche audiences.
But we’ll never get another Ben Hur. And that doesn’t sit well with me even if society can’t yet fully explain why.
The worst offenders are brake sounds not correlating to the car movement, engine sounds not correlating to the car's acceleration, nonsensical car deceleration while braking, and steering wheel not correlating to car steering.
Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.
The real problem with AI slop is not the AI. It's the people. It's always the people.
The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).
Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.
With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.
I am 100% with you. I didn't ever _use_ Sora, but some of it trickled down to me (mostly through Instagram reels). I think it's amazing that we have such great new tools to express ourselves, and that we are trying out new platforms, paradigms, and approaches.
Is there money involved? Absolutely, but I don't fault companies for trying to earn their keep.
It 100% takes work to use these tools in the right way to make something funny. Ask an LLM to make them on their own and they'll hardly evoke laughs (I'm sure that'll change too, though).
Then it became synonymous with slop, lowest common denominator content made without care, instead of a tool for enabling people willing to put in a varying level of skill, kinds of expertise and effort, like coding models did.
It’s so dumb that Zuck and Elmo want to inject^H^H^H^H^H^Hrecommend content into these people’s feeds while they’re checking in on their nieces and nephews and local book clubs.
The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.
In the span of less than 2 years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever without actively making my life worse, that doesn't really move the needle on my overall attitude.
> manipulated imagery
And we thought iPhone camera videos were bad... (they were (and are) though)

- You're making an unsubstantiated claim
- personally targeting someone you don't even know
- in order to celebrate the presumed success of a mass fraud?
Novels, cinema, television, comic books, etc.
They were all considered careless skill-free slop at some point.
The percentage of AI videos over the internet will certainly not decrease after Sora is gone.
The question is when will Chinese coding models have their Seedance moment and squeeze Opus/Codex out of the market. It weirdly feels impossible and inevitable at the same time.
It's much easier to make Qwen animate Tank Man than it is to make any Western model generate indigenous people dancing, because cough cough naked skin is baaaaad. Except this Musk one, which will nonetheless be affected by all the copyright mess.
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
My guess is they overcommitted server/energy resources, since they were generating ~30 images for each second of video, for results that might be discarded and then tried again.
Now that energy costs are increasingly less predictable because of the war, they're prioritizing what is sustainable. Willing to blow up the $1 billion Disney deal for Sora, because that's a popular IP that would have increased discarded server time.
Might be why the latest Iran propaganda video could be created in PowerPoint: https://bsky.app/profile/rachelbitecofer.bsky.social/post/3m...
Most people serious about this stuff usually have their own pipelines.
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
These are open weight models, so you can fine tune them on Lego content… But presumably they already have enough training data since they were made by Chinese companies who don’t give a shit about Western IP rights.
(This sort of question, and the Grok sexual abuse, is why I'd like to see mandatory invisible watermarks on generated images/video)
To me it seems it was "Disney gets shares and we get to use their characters in Sora".
Even if Sora breaks even, why would you gift Disney stock? It's not like they actually gave $1B to OpenAI.
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
The thing he does is convince investors to give him billions of dollars to build what he wants. Where exactly does that leave us?
I think his board fight within OpenAI, where he essentially lied to the board, his obsession with retinal-scanning everyone for his biometric cryptocurrency (Worldcoin), and how he left Y Combinator are all evidence that he’s not very heroic. Most cringe to me is that he and many others seem aware that what they are doing is corrosive and harmful to society on some level, as Altman has admitted to having a bunker somewhere around Big Sur [0]. Which... WTF.
[0] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Not too familiar with that history, but he still is listed as a courtesy credit/reviewer at the end of PG's blog entries, so I assume he didn't have too much of a bad exit?
This is a conflict of interest, and I think a very obvious one. He tried to have it both ways and was forced to choose in the end. I think putting himself in that situation rather than resigning up front to pursue his OpenAI ambitions says a lot about his character.
To me, this just came off as pathetic. It hasn't solved anything and there's no reason to believe it ever will. The whole question is completely pointless except to put the idea in viewers' heads that ChatGPT will soon revolutionize science, with no actual substance behind it. It's not even a question; there's only one possible answer. He's holding the guy verbally hostage just to manipulate dumb viewers.
So anyway that's the only memorable clip I've seen of Sam Altman, and based on that alone, fuck that guy.
Altman's reaction was very telling of the kind of person he is, just immediately lashing out at Gerstner in a childish way, asking if Gerstner wanted to sell his shares because he could find a buyer in no time.
It was a pathetically immature reaction, I wouldn't expect that from any kind of professional, even less someone who has held positions as Altman has and now sits at the top of the leadership for a company sucking hundreds of billions of investment.
Apart from that clip there's also the whole saga of sama @ Reddit, full of lies, deceptions, and the same kind of immature attitude peppered across Reddit itself.
He is a con man. Of course he’s charming and convincing, that’s how he ended up where he is. But he’s just as full of it as Musk when he was waxing lyrical about saving the world and going to Mars. They lie very convincingly.
I suspect they promised synthetic movies but it quickly became clear that they were never going to be able to deliver on this.
Slick fifteen second lulz-clips, sure, but I don't think they can make several of them consistent enough to fit into a larger video narrative without the audience finding it jarring and incoherent.
Perhaps legal at Disney also concluded that the output wouldn't be possible to copyright, which is their core business.
That story can’t be true
The desire for something "new", for a Mildly Ethical product, killed off the most obvious path to success - to actually just make TikTok+AIGC, or in the present, Douyin+Seedance2.
OpenAI is bleeding money faster than they can afford to and they are literally running out of people that they can go to for more. They need to stop the bleeding.
Sora (whatever that means) was one of the most astounding demos I’ve probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes, months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and the impact.
Personally I think the lack of nudity destroyed the adult market. But I don’t know enough, tbh.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick; they weren’t useful for me except for making icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
Did you just make that up?
Grok barely makes "M-rated" nudity, let alone porn. Musk recently claimed it can do "R-Rated content", but his post got a community note saying otherwise.
Grok has gotten a lot stricter about video from uploaded images. But it is still able to make realistic x rated porn from AI generated images it creates.
There are various jailbreaks that have been working for the longest and still work, just a brief look, half of them just involve “anime borders” and “transparent anime watermarks” over videos.
It was a party trick. I can't remember the last time I touched it. That's what SORA is, or was.
There were social games that used it as a feature, and it was fun when it worked, but it had to be disabled soon as it drained the battery so fast.
The impact of easy AI generated video is a less certain and less secure world. You can't trust your eyes anymore because of how fast and easy it is to fake video and moments. You can't trust communications with someone because how easy it is to impersonate them over video and voice. Scams involving tools like this are already running rampant and it will only get worse. The sheer level of distrust these tools have unleashed into the world makes me wish they never existed. They have burned millions (billions?) of dollars on this when that money would have been better served going to the creators whose work they stole to build it. It's rotten.
As we've seen from Grok, building a system for producing non-consensual nude images of other people will get the legal and PR hammer brought down on you fairly quickly. It's just an incredibly unethical thing to do.
So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
The cost must have been a key reason for the shutdown.
End is near.
A record-speed descent into AI slop. Is this what everything turns into when content creation becomes easy? What's happening here exactly?
There’s so many video gen models out there and given the cheaper Chinese models I’m not surprised they closed this down. Besides the initial push, any marketing regarding video gen has always been the Kling or Higgsfield models. Just never a reason to do sora
Sora was a perfect example of using a lot of compute to generate video -> we need a lot of GPUs -> a lot of RAM -> energy and land.
I am predicting the RAM shortage will soften in the next 6 months, though not by much, because the war in the Middle East will have an additional impact for some time.
There didn't seem to be any marketing for it. Like I can't even remember an ad for it or any content creator type of person pushing Sora actively.
To get access to Sora I believe you needed to be on a paid plan?
It's really difficult to get user generated content going when it's behind a paywall.
It's also hard to tell if this means that openai is in trouble, or if this is just a badly managed product that deserved to be killed. With the negative sentiment on openai, folks might think the former.
The network effects of the other two platforms are too strong, and a value prop of “watch similar videos but they’re all AI” is not strong for consumers.
Also, say what you want about AI slop, but I was on Sora a lot for a few weeks and there was a real explosion of creativity on there. It felt new and exciting, and creators were engaging with each other and sharing feedback and tips. I generated a ton of videos and surprised myself with a flurry of creative ideas.