> In particular, we'd like to acknowledge the remarkable creative output of Japan--we are struck by how deep the connection between users and Japanese content is!
Translation from snake speech bs: We've been threatened by Japanese artists via their lawyers that unless we remove the "Ghibli" feature that earned us so much money, and others like it, we're going to get absolutely destroyed in court.
My hunch is that OpenAI used Ghibli as the example in their earlier DALL-E blog posts strategically, because the Japanese PM had earlier said anime was not protected by copyright with respect to training. OpenAI is always sneakier than most people give them credit for.
I'm pretty sure this is in response to the flood of Sora anime parodies that have hit TikTok in the past 48 hours. Seems like OpenAI is acknowledging some strongly worded letters from anime rights holders rather than individual artists, or the response wouldn't be this swift.
Most independent artists will disagree with this statement. They do it for passion, to communicate, to tell stories, to fulfill their own urges. Some works incidentally hit a sweet spot and become commercial successes, but that's not their purpose. On the other hand, the 'art' you see being marketed around you is made specifically to be marketed and sold, with little personal connection to the artist, and often against their own preferences. That's "content".
Is that what they tell you when you’re standing in the gallery with a checkbook? Or in the boardroom with a signature?
No, you almost never see art that wasn’t meant to be sold. Public art pieces are commissioned (sold), and art in galleries was created by professional artists (even if commercially unsuccessful) 99.99999% of the time.
Surely if this wasn’t true, you could point to a few specific examples of art — or even broad categories of art — that weren’t made to be sold and that you have personally seen?
I think you're just interpreting the meaning of "made to be sold" very literally. Of course artists want to make a living and have their art be appreciated, so they expect pieces to be sold; but that is not the main motivation behind making the art, where commercial "art" - advertising, mainstream cinema, pop music, most art galleries, anime, 80% of what you see in arts and crafts fairs, pieces in IKEA - is created with profit as the main motive.
Going back to the origin of this: stating that Ghibli-style videos generated with Sora (which the OP initially called "content") are equivalent to Studio Ghibli movies because they are both "art made to be sold" would be wild. A film like Spirited Away took over 1 million hours of work; if making money were the main goal, it would never have happened.
> Of course artists want to make a living and have their art be appreciated, so they expect pieces to be sold
"they want their to be appreciated, so they expect pieces to be sold" is a clever trick but one is not related to the other. One could want their art to be appreciated and never sell it, but virtually no one would see this art for a variety of reasons including the fact that marketability increases visibility and that there is very, very little amateur art that is worth looking at, much less promoting to a larger audience.
It seems you agree that in fact art (that anyone sees) is overwhelmingly made to be sold.
I didn't say anything about their "main motivation" and neither you nor I (nor even the artist, frankly) could say much about what someone's main motivation is.
What we can say is that nearly all of the art anyone sees was in fact made to be sold, which is the specific claim that I made.
I don't think anyone supposes it does. They're arguing that the word choice implies something about the speaker's value system and the place that art or human culture has in it.
Wait until they coopt the word "art" to include AI-generated slop. I dread the future discussion tarpits about whether AI creations can be considered art.
I don't understand some parts of this, the writing doesn't seem to flow logically from one thought to another.
> Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
> We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.
> The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable.
The first part of this paragraph implies that the video generation service is more expensive than they expected, because users are generating more videos than they expected and sharing them less. The next sentence then references sharing revenue with "rightsholders"? What revenue? The first part makes it sound like there's very little left over after paying for inference.
Secondly, to make a prediction about the future business model: it sounds like large companies (Disney, Nintendo, etc.) will be able to enter revenue-sharing agreements with OpenAI where users pay extra to use specific brand characters in their generated videos, and some of that licensing cost will be returned to the "rightsholders". But I bet everyone else - you, me, small YouTube celebrities - will be left out in the cold with no control over their likeness. After all, it's not like they could possibly identify every single living person and tie them to their likeness.
2. They might get into trouble charging users to generate some other entity's IP, so they may revenue-share with the IP owner.
They're probably still losing money even if they charge for video generation, but recouping some of that cost, even if they revshare, is better than nothing.
You got the last paragraph wrong. They need to negotiate with rights holders on the revenue split. They’re hoping that the virality aspect will be more important to rights holders than money alone, but they will of course also give money to rights holders.
Or, in other words: here’s Sam Altman saying to Disney “you should actually be grateful if people generate tons of videos with Disney characters, because it puts them front and center again”, but then acknowledging that OpenAI also benefits from it and therefore should pay Disney something. That will be his argument when negotiating for a lower revenue share, and if his theory holds, then brands that don’t enter into a revenue share with OpenAI because they don’t like the deal terms may lose out on even more money and attention than they would get via Sora.
"Sora Update #4: Through a partnership with Google, Meta and Snap Inc., you will be able to generate tasteful photos of the cute girl you saw on the bus. She will receive a compensation of $0.007 once she signs our universal content creators' agreement."
It's confusing to me because charging money is implied - "we are going to have to somehow make money" - but not actually stated, and then it jumps past the revenue structure into sharing money with "rightsholders".
It has left me wondering if, instead of just charging users, they would start charging "rightsholders" for IP protection. I could see a system where e.g. Disney pays OpenAI $1 million up front to train in recognition of Mickey Mouse, and then receives a revenue share back from users who generate videos containing Mickey Mouse.
“Dear rights holders, we abused your content to train our closed model, but rest assured we’ll figure out a way to get you pennies back if you don’t get too mad at us”
It is already illegal to use images in somebody's likeness for commercial purposes or purposes that harm their reputation, could be confusing, etc... Basically the only times you could use these images are for some parodies, for public figures, and fair use.
Now OpenAI will be lecturing their own users, while expecting those same users to make them rich. I suspect the users will find it insulting.
Generation for personal use is not illegal, as far as I know.
Don't worry, you can write "dumb ass" here without needing to use algospeak. This isn't Instagram or TikTok and you won't be unpersoned by a "trust and safety" team for doing so.
P.S. No need for a space after your meme arrows :-)
copyright is such a poorly designed tax on our society and culture. innovations like Sora should be possible, but face huge headwinds because... Disney wants even more money?
the blind greed of copyright companies disgusts me
Society has benefited hugely from copyright law. In fact, the first copyright laws were created in response to desires to have education material/a better educated society.
Saying 'Disney/laws bad because I want a billionaire corporation to have access to something they know they have no right to, but built their business model around using anyway' isn't saying anything but 'I want what I want'.
If anything society should take this slow and do it right, not throw out hundreds of years of thinking/decisions/progress because 'disney' and 'cool new tech'.
We should not bend/throw away laws because billion dollar industry chose to build a new business model around ignoring them. Down that path lies dystopia.
> People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
What did OpenAI expect, really? They imposed no meaningful generation limits, and "very small audiences" is literally the point of an invite-only program.
Update after more testing: looks like every popular video game prompt (even those not owned by Nintendo) triggers a Content Warning, and prompting "Italian video game plumber" didn't work either. Even indie games like Slay the Spire and Undertale got rejected. The only one that didn't trigger a "similarity to third party content" Content Violation was Cyberpunk 2077.
Even content like Spongebob and Rick and Morty is now being rejected after having flooded the feeds.
OpenAI likely intended users to post every video they make to the public feed instead of just using the app as a free video generator (as people do with Midjourney).
Of course, another reason that people don’t publish their generated videos is because they are bad. I may or may not be speaking from experience.
my read: they made the app look like tiktok, and were expecting people to make tiktok style viral videos. instead, what people are making is cameo-style personalized messages for their friends, starring mario.
And I don't think you can revenue-share these generations with rights owners just like that. What rights owner will let their "product" be depicted in any imaginable situation, by any prompt, by anyone on the planet? Words are powerful, an image is worth a thousand words, and video is worth a million more... I've seen a quick Sora video, from OpenAI themselves I believe, of a real-life Mario Bros princess, a rather voluptuous one, playing herself on a console, and the image stuck. And it's not just misuse, distortion, or appropriation but also association: imagine a series of very viral videos of Pikachu drinking Coke, or a fan series of Goku with friends at KFC... it could condition, or steal, future marketing deals from the rights holders.
This is a non-starter, unless you own a "license to AI" from the rights owner directly, such as an ad agency that uses Sora to generate an ad it was hired to do.
Indeed. If you read between the lines that’s clearly it.
And on that note can I add how much I truly despise sentences like this:
> We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all).
To me this sentence sums up a certain kind of passive aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl. It’s sort of a conceptual cousin to concepts like banning someone from a service without even telling them or using words like “sunset” instead of “cancel” and so on.
What that sentence actually fucking means is that a lot of powerful people with valuable creative works contacted them with lawyers telling them to knock this the fuck off. Which they thought was appropriate to put in parentheses at the end as if it wasn’t the main point.
Wow, I am sure excited for your new kind of interactive fan fiction of my properties. It will accrue us a lot of value! Anyway, please do not use our properties.
Nice but there's no need for the "please": it's not a request, it's a demand from an official lawyer-penned, strongly-worded, lawsuit actionable letter.
You may not like their message, but the style can be found in practically any public communication from any corporation. Read a layoff announcement from Novo Nordisk as an example [1]. No difference.
This is what I don’t like about HN, manufactured outrage when one dislikes the messenger. No substance whatsoever.
When users are given such a powerful tool like Sora, there will naturally be conflicts. If one makes a video putting a naked girl in a sacred Buddhist temple in Bangkok, how do you think Thai people will react?
This is OpenAI attempting balancing acts between conflicting interests, while trying to make money.
I actually really like that comment. It's an example of classic doublespeak, and it's a shame that "Open"AI uses it and that we as a society tolerate it (from other companies too, of course).
It feels like big exploitative multimedia companies are the main force fighting big exploitative ML companies over copyright of art.
I wish big exploitative tech companies would fight them over copyright of code but almost all big exploitative tech companies are also big exploitative ML companies.
I'm not really disagreeing with you, but I think it's more about salesmanship than anything else. "We released v1 and copyright holders immediately threatened to sue us, lol" sounds like you didn't think ahead, and also paints copyright holders in a negative light; copyright holders who you need to not be enemies but who, if you're not making it up, are already unhappy enough to want to sue you.
Sam's sentence tries to paint what happened in a positive light, and imagines positive progress as both sides work towards 'yes'.
So I agree that it would be nice if he were more direct, but if he's even capable of that it would be 30 years from now when someone's asking him to reminisce, not mid-hustle. And I'd add that I think this is true of all business executives, it's not necessarily a Silicon Valley thing. They seem to frequently be mealy-mouthed. I think it goes with the position.
> To me this sentence sums up a certain kind of passive aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl.
To me that's Sam Altman in a nutshell. I remember listening to an extended interview with him and I felt creeped out by the end of it. The film Mountainhead does a great job capturing this.
It says a lot about how society values copyright in different media that, four years into people yelling about these being copyright-violation machines, the first emergency copyright update has come with video.
So people who spend time working on code or art should have exactly zero protection against somebody else just taking their work and using it to make money?
No, but the current system is totally idiotic. Why not have a fixed timeframe, e.g. 30-50 years, to make money? Life of the author + x years is stupid not only because it's way too long: it keeps going until well after the creator is no longer benefitting, and it can cause issues with works where you don't know who the author is, so you can't cleanly say it's old enough to be out of copyright.
I'm not sure this would actually change very much for most creators (specifically smaller ones, who need the most protection). Media typically makes money in its first few years of life, not 70 years on.
This does not demonstrate a sound understanding of how the public domain works, why copyright lengths have been extended so ferociously over the last century (it's shareholders who want this), nor the impact it has both on creative process and public conversation.
This is a highly complex question about how legal systems, companies, and individual creatives come in conflict, and cannot be summarized as a positive creative constraint / means to celebrate their works.
I develop copyrighted material, from the letter and the images, that I've both sold to studios and own myself. Copyright lengths are there to prevent the shareholder class from rapid exploitation. Once copyright shrinks to years, not decades, shareholders will demand that it be exploited rather than new ideas. The public conversation is rather irrelevant, as the layperson doesn't have a window into the massive risk and long-term development required to invent new things; that's why copyright is not a referendum, it's a specialized discourse. Yes, the idea of long-term copyright developed under work-for-hire or individual ownership can be easily summarized: license, sample, or steal. Those are the windows.
- owners of large platforms who don't care what "content"[0] is successful or if creators get rewarded, as long as there is content to show between ads
- large corporations who can afford to protect their content with DRM
Is that correct?
Do you expect it to play out differently? Game it out in your head.
Vague. Are you talking about reasons to create like the joy of creating? Your bio describes you as a 'tech entrepreneur', not 'DIY tinkerer'. So I'll assume that when you spend a great deal of time entrepreneuring something, you do so with the hope of remuneration. Maybe not by licensing the copyright, but in some form.
Permissive licenses are great in software, where SAAS is an alternative route to getting paid. How does that work if you're a musician, artist, writer, or filmmaker who makes a living selling the rights to your creative output? It doesn't.
> Vague. Are you talking about reasons to create like the joy of creating?
That’s one of them, but I really don’t have to be specific about the reasons. I just have to point out the existence of permissively licensed works. You said:
> Great, you've just removed any incentive for people to make anything.
This is very obviously untrue. Perhaps you meant to say “…you’ve just removed some incentives for people to make some things”?
"Hi, as the company that bragged about how we had ripped off Studio Ghibli, and encouraged you to make as many still frames as possible, we would now like to say that you are making too many fake Disney films and we want you to stop."
These attempted limitations tend to be very brittle when the material isn’t excised from the training data, even more so when it’s visual rather than just text. It becomes very much like that board game Taboo where the goal is to get people to guess a word without saying a few other highly related words or synonyms.
For example, I had no problem getting the desired results when I prompted Sora for “A street level view of that magical castle in a Florida amusement area, crowds of people walking and a monorail going by on tracks overhead.”
Hint: it wasn’t Universal Studios, and unless you knew the place by sight you’d think it was the mouse’s own place.
On pure image generation, I forget which model, one derived from stable diffusion though, there was clearly a trained unweighting of Mickey Mouse such that you couldn’t get him to appear by name, but go at it a little sideways? Even just “Minnie Mouse and her partner”? Poof- guardrails down. If you have a solid intuition of the term “dog whistling” and how it’s done, it all becomes trivial.
Absolutely. Though the smarter these things get, and the more layers of additional LLMs on top playing copyright police that there are, I do expect it to get more challenging.
My comment was intended more to point out that copyright cartels are a competitive liability for AI corps based in "the west". Groups who can train models on all available culture without limitation will produce more capable models with less friction for generating content that people want.
People have strong opinions about whether or not this is morally defensible. I'm not commenting on that either way. Just pointing out the reality of it.
It's a matter of time. I imagine they'll get more effective at suppressing activations of specific concepts within the LLM, possibly in real time. I.e., instead of filtering prompts for "Mickey Mouse" analogies, or unlearning the concept, or even checking the output before passing it to the user, they could monitor the network for specific activation patterns and clamp them during inference.
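For the curious, the clamping idea described above (sometimes called activation steering or ablation) can be sketched with a PyTorch forward hook. Everything here is a toy and an assumption for illustration: the layer, the "concept direction", and the data are made up; in a real model the direction would be estimated from activations on concept-related prompts.

```python
import torch

# Toy stand-in for one transformer layer's hidden states.
torch.manual_seed(0)
hidden_dim = 8
layer = torch.nn.Linear(hidden_dim, hidden_dim)

# Hypothetical "concept direction" (in practice, estimated from
# activations on prompts about the concept); here, a random unit vector.
concept = torch.randn(hidden_dim)
concept = concept / concept.norm()

def clamp_concept(module, inputs, output):
    # Project the activation onto the concept direction and subtract
    # that component, effectively clamping the concept to zero.
    coeff = output @ concept                      # shape: (batch,)
    return output - coeff.unsqueeze(-1) * concept

# A forward hook that returns a tensor replaces the layer's output.
handle = layer.register_forward_hook(clamp_concept)

x = torch.randn(2, hidden_dim)
with torch.no_grad():
    y = layer(x)

# The clamped activations have (near-)zero component along `concept`.
print((y @ concept).abs().max().item() < 1e-4)  # → True
handle.remove()
```

The same hook mechanism works on real transformer blocks, which is why this kind of intervention is cheap to run at inference time; the hard part is finding a direction that captures the concept without degrading everything else.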
> "We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)"
Marvelous ability to convolute the simple message "rightsholders told us to fuck off".
Obviously, OpenAI could have had copyright restrictions in place from the get-go with this, but instead made an intentional decision to allow people to generate everything ranging from Spongebob videos to Michael Jackson videos to South Park videos.
Today, Sora users on reddit are pretty much beside themselves because of newly enabled content restrictions. They are (apparently) no longer able to generate these types of videos and see no use for the service without that ability!
To me it raises two questions:
1) Was the initial "free for all" a marketing ploy?
2) Is it the case that people find these video generators a lot less interesting when they have to come up with original ideas for the videos and cannot use copyrighted characters, etc.?
The logic is that if they don't do it then Meta or some other company will, & they have decided it's better that they do it b/c they are the better, more righteous, & moral people. But the main issue is I don't understand how they went from solving general intelligence to becoming an ad-sponsored synthetic media company without anyone noticing.
Oh, we all noticed, but this is a new level of entrepreneurial narcissism and corporate gaslighting.
Maybe one day Sam Altman will generally be perceived as who he actually is
He is the boy wonder genius who will usher an era of infinite abundance but before he does that he has to take a detour to generate a lot of synthetic media & siphon a lot of user queries at every hour of every day so that advertisers can better target consumers w/ their plastic gadgets & snake oils. I'm sure they just need a few more trillions in data center buildouts & then they can get around to building the general purpose intelligence that will deliver us to the fully automated luxurious communist utopia.
As someone who is concerned about how artists are supposed to earn a living in an ecosystem where anyone can trivially copy any style effortlessly, it does sound better than the status quo?
The fact that LLMs are trained on humans data yet the same humans receive no benefits from it (cannot even use the weights for free, even if they unwillingly contributed to it existing), kind of sucks.
What alternative is there? Let companies freely slurp up people's work and give absolutely nothing back?
You make a good point. They may as well admit at this point that curing cancer, new physics, and AGI aren't going to happen very soon.
What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/tv studios, producers, etc. Why package it as an app for your niece to make viral videos that's bound to lose money with every click? Just sell it for $50k/hr of video to someone with deep pockets. Is it just a publicity stunt?
> What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/tv studios, producers, etc.
Because it’s not good enough, I would assume. Hard to see it actually being useful in this role.
The query data they are collecting can be used for ad targeting. Remember, if you're not paying for it (and in many cases even when you are paying for it), then the data collected from your use of the application is going to be used by someone to make money one way or another. Google made billions from search queries, & OpenAI has an even better query/profiling perspective on its users b/c they are providing all sorts of different modalities for interaction. That data is extremely valuable, analogous to how Google search queries (along w/ data from their other products) are extremely valuable to corporate marketing departments that are willing to pay a premium for access to Google's targeting algorithms.
> AI slop tictoc to waste millions of human-hours.
Don't forget the power it consumes from an already overloaded grid [while actively turning off new renewable power sources], the fresh water data centers consume for cooling, and the noise pollution forced on low-income residents.
As a European, I don't know if it's more funny or sad that American citizens close to the data centers are effectively subsidizing AI for the rest of the world by paying more for their electricity, since the data centers are mostly there.
Well, yeah, but that stuff was all bullshit, whereas the fake tiktok kind of exists and might keep the all-important money taps on for another six months or so.
>Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
Once again, Scam Altman looking for excuses to raise more money. What a joke…
It is sad (and predictable, PR- and legal-wise) that there was no mention of Studio Ghibli.
I would actually be moved if there were something genuine along the lines of "We are sorry - we wanted to make a PR stunt, but we went too hard," with real $ offered for it. (Not that I believe it is going to happen, as GenAI does not like this kind of precedent.)
Someone I know uses chatgpt a lot. Not because they find it incredibly valuable. But because they want to stick it to the VC's funding OAI and increase their costs with no revenue.
So this is why you have to be careful about usage numbers. The only true meaningful number is about those who are contributing towards revenue. Without that OAI is just a giant money sink.
I suspect this has the opposite effect. More daily users == higher valuation, so more profit if the VCs decide to sell. There's no pressure on OpenAI to become profitable yet.
Is this a roundabout way to say that they've realised that people are using their service to make porn of celebrities and fictional characters in the entertainment industry, and aim to figure out a way to keep making money from it without involving "rightsholders" in scandals?
The detail that rightsholders seem to be demanding a revenue share is interesting. That sounds administratively and technologically very complex, and probably also just plain expensive, to implement.
With some back-of-the-napkin math, I am pretty sure you're off by at least two orders of magnitude, conceivably four. I think 2 cents per video is an upper limit.
Generally speaking, API costs that the consumer sees are way higher than compute costs that the provider pays.
EDIT: upper limit on pure ongoing compute cost. If you factor in chip capital costs, as another commenter on the other thread pointed out, you might add another order of magnitude.
I suspect amortized training costs are only a relatively small fraction of the amortized hardware costs (i.e. counting amortized hardware costs already accounts for the large fraction of the cost of training and pulling out training as a completely separate category double counts a lot of the cost).
It’s more a ballpark since exact numbers vary and OpenAI could be employing shenanigans to cut costs, but in comparison, Veo 3 which has similar quality 720p video costs $0.40/second for the user, and Sora’s videos are 10 seconds each. Although Veo 3 could cost more or less to Google than what is charged to the user.
I suspect OpenAI’s costs to be higher if anything since Google’s infra is more cost-efficient.
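To make the orders-of-magnitude claim above concrete, here is the thread's back-of-the-napkin arithmetic in runnable form. All the numbers are the thread's own guesses (Veo 3's roughly $0.40/second API price, 10-second Sora clips, the 2-cents-per-video compute upper bound), not official figures.

```python
import math

# Thread's assumed numbers -- guesses, not official figures.
veo3_price_per_second = 0.40   # $/s, what Google charges API users for Veo 3
sora_clip_seconds = 10         # typical Sora clip length
compute_cost_guess = 0.02      # the "2 cents per video" upper-bound guess

# Comparable user-facing price for a 10 s clip at Veo 3 rates.
user_facing_price = veo3_price_per_second * sora_clip_seconds  # $4.00

# Gap between that price and the guessed pure-compute cost.
gap_orders = math.log10(user_facing_price / compute_cost_guess)
print(f"${user_facing_price:.2f}/clip vs ${compute_cost_guess:.2f}/clip "
      f"-> ~{gap_orders:.1f} orders of magnitude")
# → $4.00/clip vs $0.02/clip -> ~2.3 orders of magnitude
```

Which is consistent with the general point that user-facing API prices sit well above raw compute cost, without telling us what Sora actually costs OpenAI per clip.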
This "but it's too hard to implement" excuse never made sense to me. So it's doable to make a system like this, to have smart people working on it, hire and poach other smart people, to have payments systems, tracking systems, personal data collection, request filtering and content awareness, all that jazz, but somehow all of that grinds to a halt the moment a question like this arises? and it's been a problem for years, yet some of the smartest people are just unable to approach it, let alone solve it? Does it not seem idiotic to see them serve 'most advanced' products over and over, and then pretend like this question is "too complex" for them to solve? Shouldn't they be smart enough to rise up to that level of "complexity" anyway?
Seems more like selective, intentional ignoring of the problem to me. It's just because if they start to pay up, everyone will want to get paid, and paying other people is something that companies like this systematically try to avoid as much as possible.
Workers getting paid a flat rate while owners are raking in the entire income generated by the work is how the rich get richer faster than any working person can.
So that sounds like they "released" this fully aware it would generate loads of hype, but never ever be legally feasible to release at scale, so we can expect some heavily cut down version to eventually become publicly released?
Feels very much like a knee-jerk response to Facebook releasing their "Vibes" app the week before. It's basically the same thing; OpenAI are probably willing to light a pile of money on fire to take the wind out of Facebook's sails.
I also don't think the "Sam Altman" videos were authentic/organic at all, smells much more like a coordinated astroturfing campaign.
I don't have access, but it seems you can insert a friend into a video? Are we not rightsholders to our own likeness? It seems like a person should be able to block a video someone shares without their consent, or earn revenue if their likeness is used.
> You have to explicitly opt into sharing your likeness with permission controls.
Ok... how is that supposed to work? I don't have an OpenAI account, there are no permission controls for me. Someone else could easily upload a picture of me, no?
This seems... pretty easy to get around? There are already open weight models which can take any photo and audio and make a video out of it with the character speaking/singing/whatever, and it runs on normal consumer hardware.
So you wouldn't know what the three numbers are ahead of time, you'd have to be using a real time face replacement model (or I guess live-switching between pre-rendered clips) and somehow convince the app that you're the iPhone selfie cam.
But at that point you might as well just use WAN 2.2 Animate and forget about Sora.
That's more than I expected from them, genuinely. But it still doesn't seem like a very solid solution. I wonder how much variation in look and voice it accepts?
My partner likes to cosplay, and some of the costumes are quite extensive. If they want to generate a video in a specific outfit will they need to record a new source video? The problem exists in the other direction, too. If someone looks a lot like Harrison Ford, will they be able to create videos with their own likeness?
I wonder how this extends to videos with multiple people, as well. E.g. if both my friend and I want to be in a video together.
It’s not like making a video of someone saying a number, given a single photo and any voice sample, is a very difficult problem today. We can just fast-forward a few weeks into a world where this "registration" is already broken.
At stay22 we've built a way to monetize travel videos automatically with multi-modal LLMs, starting with travel and soon expanding into retail. It's already live and being tested with a few YouTubers, via rev share. It automatically detects destinations, activities, and hotels, implicitly through the visuals or explicitly from the vlogger's voice.
> In particular, we'd like to acknowledge the remarkable creative output of Japan--we are struck by how deep the connection between users and Japanese content is!
Translation from snake speech bs: We've been threatened by Japanese artists via their lawyers that unless we remove the "Ghibli" feature that earned us so much money, and others like it, we're going to get absolutely destroyed in court.
My hunch is that OpenAI used Ghibli as the example in their earlier DALL-E blog posts strategically, because anime was earlier said by Japan's PM not to be protected by copyright in training. OpenAI is always sneakier than most people give them credit for.
> OpenAI is always sneakier than most people give them credit for.
There's usually more useful information in what Sam Altman specifically doesn't say than what he does.
I'm pretty sure this is in response to the flood of Sora anime parodies that have hit TikTok in the past 48 hours. Seems like OpenAI is acknowledging some strongly worded letters from anime rights holders rather than individual artists, or the response wouldn't have been this swift.
hey don't forget about nintendo too
> Japanese “content”
Sickening
No human writes like this. If he actually did it’s worrying.
Would you mind explaining? As a non native English speaker I may have missed some nuance.
The word “content” is often perceived as devaluing creative work: https://www.nytimes.com/2023/09/27/movies/emma-thompson-writ...
Paradoxically, it signals indifference or disregard about the actual contents of a work.
The word content. Art would have been the appropriate term.
Disagree, it is content. The Japanese anime (referenced) is specifically made to be marketed and sold.
Almost every piece of art you've ever seen (by virtue of you seeing it) was made to be marketed and sold.
Art is overwhelmingly not a charity project from artists to the commons.
I presume "by virtue of you seeing it" includes other conditions or I don't understand how you can claim such a thing.
Where exactly have you seen art that wasn’t made to be sold? Be specific.
Most independent artists will disagree with this statement. They do it for passion, to communicate, to tell stories, to fulfill their own urges. Some works incidentally hit a sweet spot and become commercial successes, but that's not their purpose. On the other hand, the 'art' you see being marketed around you is made specifically to be marketed and sold, with little personal connection to the artist, and often against their own preferences. That's "content".
Is that what they tell you when you’re standing in the gallery with a checkbook? Or in the boardroom with a signature?
No, you almost never see art that wasn't meant to be sold. Public art pieces are commissioned (sold), and art in galleries was created by professional artists (even if commercially unsuccessful) 99.99999% of the time.
Surely if this wasn’t true, you could point to a few specific examples of art — or even broad categories of art — that weren’t made to be sold and that you have personally seen?
I think you're just interpreting "made to be sold" very literally. Of course artists want to make a living and have their art appreciated, so they expect pieces to be sold; but that is not the main motivation behind making the art. Commercial "art" - advertising, mainstream cinema, pop music, most art galleries, anime, 80% of what you see at arts and crafts fairs, pieces in IKEA - is created with profit as the main motive.
Going back to the origin of this: stating that Ghibli-style videos generated with Sora (which the OP initially called "content") are equivalent to Studio Ghibli movies because they are both "art made to be sold" would be wild. A film like Spirited Away took over 1 million hours of work; if making money were the main goal, it would never have happened.
> Of course artists want to make a living and have their art be appreciated, so they expect pieces to be sold
"they want their art to be appreciated, so they expect pieces to be sold" is a clever trick, but one is not related to the other. One could want their art to be appreciated and never sell it, but virtually no one would see this art, for a variety of reasons including the fact that marketability increases visibility and that there is very, very little amateur art that is worth looking at, much less promoting to a larger audience.
It seems you agree that in fact art (that anyone sees) is overwhelmingly made to be sold.
I didn't say anything about their "main motivation" and neither you nor I (nor even the artist, frankly) could say much about what someone's main motivation is.
What we can say is that nearly all of the art anyone sees was in fact made to be sold, which is the specific claim that I made.
> nearly all of the art anyone sees
See comment above.
Yes you're just restating my thesis but with the air of disputing it.
Buddy your thesis is that art does not exist because of capitalism. That is a ridiculous 'thesis'.
... what? Not sure how you got that, but no, that's not what I believe.
Here, I'll restate it:
> Almost every piece of art you've ever seen (by virtue of you seeing it) was made to be marketed and sold.
> Art is overwhelmingly not a charity project from artists to the commons.
> almost never see art that wasn’t meant to be sold
Because most art isn't in a gallery or store. You quite literally aren't seeing it.
In other words:
> Almost every piece of art you've ever seen (by virtue of you seeing it) was made to be marketed and sold.
Art is not an objective definition, it is the subjective experience of the observer. Content is a format.
The involvement of money does not preclude a work from being considered art. Your claim is cynical and ahistorical.
it also doesn’t preclude it from being content.
I don't think anyone supposes it does. They're arguing that the word choice implies something about the speaker's value system and the place that art and human culture have in it.
Well, yes, but I didn’t really think that needed to be said.
some of them are cultural products too.
Wait until they coopt the word "art" to include AI-generated slop. I dread the future discussion tarpits about whether AI creations can be considered art.
A piece of wood, a rock can be pretty/interesting to look at. It is not art. AI slop might be pretty/interesting, but it is not art.
My person in deity that future has been here for a while now.
Not only do they consider it art, they call what you and I consider art "humanslop" and consider it inferior to AI.
This sounds a lot like boomers complaining about kitty litter instead of bathrooms in elementary school
It's easy to get too chronically online and focus on some tiny weird thing you saw when in fact it's just a tiny weird thing
None of us should be surprised. This joker has zero respect for the artistry of humans.
I don't understand some parts of this, the writing doesn't seem to flow logically from one thought to another.
The first part of this paragraph implies that the video generation service is more expensive than they expected, because users are generating more videos than they expected and sharing them less. The next sentence then references sharing revenue with "rightsholders"? What revenue? The first part makes it sound like there's very little left over after paying for inference.

Secondly, to make a prediction about the future business model: it sounds like large companies (Disney, Nintendo, etc.) will be able to enter revenue sharing agreements with OpenAI where users pay extra to use specific brand characters in their generated videos, and some of that licensing cost will be returned to the "rightsholders". But I bet everyone else - you, me, small YouTube celebrities - will be left out in the cold with no control over their likeness. After all, it's not like they could possibly identify every single living person and tie them to their likeness.
1. They need to charge users for generation.
2. They might get into trouble charging users to generate some other entity's IP, so they may revenue-share with the IP owner.
They're probably still losing money even if they charge for video generation, but recouping some of that cost, even if they revshare, is better than nothing.
You got the last paragraph wrong. They need to negotiate with rights holders on the revenue split. They’re hoping that the virality aspect will be more important to rights holders than money alone, but they will of course also give money to rights holders.
Or, in other words: here’s Sam Altman saying to Disney “you should actually be grateful if people generate tons of videos with Disney characters because it puts them front and center again.”, but then he acknowledges that OpenAI also benefits from it and therefore should pay Disney something. But this will be his argument when negotiating for a lower revenue share, and if his theory holds, then brands that don’t enter into a revenue share with OpenAI because they don’t like the deal terms may lose out on even more money and attention that they would get via Sora.
> After all, it's not like they could possibly identify every single living person and tie them to their likeness.
Wasn’t he literally scanning eye balls a couple years ago?
the scanning continues.
"Just look into the orb, bro."
"Sora Update #4: Through a partnership with Google, Meta and Snap Inc., you will be able to generate tasteful photos of the cute girl you saw on the bus. She will receive a compensation of $0.007 once she signs our universal content creators' agreement."
I don't get the confusion. He's saying that
(i) they will need to start charging money per generation (ii) they will share some of this money with rightsholders
It's confusing to me because charging money is implied - "we are going to have to somehow make money" - but not actually stated, and then it jumps past the revenue structure into sharing money with "rightsholders".
It has left me wondering if, instead of just charging users, they would start charging "rightsholders" for IP protection. I could see a system where e.g. Disney pays OpenAI $1 million up front to train in recognition of Mickey Mouse, and then receives a revenue share back from users who generate videos containing Mickey Mouse.
they will TRY to share this money ;)
Yes – "with rightsholders who want their characters generated by users. "
So it's not about reimbursing "rightsholders" they rip off. It's about giving a pittance to those who allow them to continue to do so.
Sorry, trying to give a pittance to them.
They will share the money with the rights holders large enough to sue them. Fuck the rest. Just as they’ve done with training material for ChatGPT.
> the writing doesn't seem to flow logically from one thought to another.
Neither has most of the stuff Sam has said since basically the moment he started talking.
It is possible, perhaps, that he is actually a very stupid person!
My read says intelligent sociopathic narcissist.
“Dear rights holders, we abused your content to train our closed model, but rest assured we’ll figure out a way to get you pennies back if you don’t get too mad at us”
It is already illegal to use images in somebody's likeness for commercial purposes or purposes that harm their reputation, could be confusing, etc... Basically the only times you could use these images are for some parodies, for public figures, and fair use.
Now OpenAI will be lecturing their own users while expecting those same users to make them rich. I suspect the users will find it insulting.
Generation for personal use is not illegal, as far as I know.
you can use the images to harm someone’s reputation legally as long as you don’t represent them as real.
Wait, are you telling me Sam Altman has no regard for the law and thinks his own messianic endeavors are more important than that? Shocker!
> launch new sora update
> enable generating ghibli content since users are ADDICTED to that style
> willingly ignore the fact that the people who own this content don't want this
> wait a few days
> "ooooh we're so sorry for letting these users generate copyrighted content"
> disables it via some dumb ahh prompt detection algorithm
> dumb down the model and features even more
> add expensive pricing
> wait a few months
> launch new model without all of these restrictions again so that the difference to the new model feels insane
>dumb ahh prompt detection algorithm
Don't worry, you can write "dumb ass" here without needing to use algospeak. This isn't Instagram or TikTok and you won't be unpersoned by a "trust and safety" team for doing so.
P.S. No need for a space after your meme arrows :-)
I'm new to Sora. Which step are we in at the moment?
copyright is such a poorly designed tax on our society and culture. innovations like Sora should be possible, but face huge headwinds because... Disney wants even more money?
the blind greed of copyright companies disgusts me
Society has benefited hugely from copyright law. In fact, the first copyright laws were created in response to desires to have education material/a better educated society.
Saying 'Disney/laws bad because I want a billionaire corporation to have access to something they know they aren't entitled to, but built their business model around using anyway' isn't saying anything but 'I want what I want'.
If anything society should take this slow and do it right, not throw out hundreds of years of thinking/decisions/progress because 'disney' and 'cool new tech'.
We should not bend/throw away laws because billion dollar industry chose to build a new business model around ignoring them. Down that path lies dystopia.
Yeah, Nintendo called, and faster than expected.
> People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
What did OpenAI expect, really? They imposed no meaningful generation limits, and "very small audiences" is literally the point of an invite-only program.
Update after more testing: looks like every popular video game prompt (even those not owned by Nintendo) triggers a Content Warning, and prompting "Italian video game plumber" didn't work either. Even indie games like Slay the Spire and Undertale got rejected. The only one that didn't trigger a "similarity to third party content" Content Violation was Cyberpunk 2077.
Even content like Spongebob and Rick and Morty is now being rejected after having flooded the feeds.
I see a movie: The MoTrix, copyright blasting Soraddicts invent a new prompting language (or discover the one Altman seeded) as a way of evading Agents of the Entity, a © deity/program. Once unleashed, the world descends into HeroClix and ReadyPlayerOne slop simulation where original becomes indistinguishable from stolen.
I don't understand. What do they mean by very small audiences? Am I not supposed to make videos for myself?
OpenAI likely intended users to post every video they make to the public feed, instead of just using the app as a free video generator (like Midjourney).
Of course, another reason that people don’t publish their generated videos is because they are bad. I may or may not be speaking from experience.
Can confirm.. I got access to the app yesterday and I have used it exclusively for making drafts and sending them to my friends without posting.
100%. I’m not comfortable sharing likeness of myself publicly. I send goofy stuff to friends. That was day 1, at least.
Day 2+ I haven’t used the app again.
my read: they made the app look like tiktok, and were expecting people to make tiktok style viral videos. instead, what people are making is cameo-style personalized messages for their friends, starring mario.
And I don't think you can revenue share these generations with rights owners just like that. What rights owner will let their "product" be depicted in any imaginable situation, by any prompt, by anyone on the planet? Words are powerful, an image is worth a thousand words, and a video a million. I've seen a quick Sora video, from OpenAI themselves I believe, of a real-life Mario Bros princess, a rather voluptuous one, playing herself on a console, and the image stuck. And it's not just misuse, distortion or appropriation but also association: imagine a series of very viral videos of Pikachu drinking Coke, or a fan series of Goku with friends at KFC... it could condition, or steal, future marketing deals for the rights holders.
This is a non-starter, unless you own a "license to AI" from the rights owner directly, such as an ad agency that uses Sora to generate an ad it was hired to do.
Current limit seems to be 100 per rolling 24 hour period, so not unlimited but definitely huge given the compute costs.
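A rolling 24-hour window is a different mechanism from a daily reset: each generation only stops counting a full day after it happened. A minimal sketch of that bookkeeping (the 100-per-24h figures are just the numbers reported above, not anything official, and the class name is made up):

```python
import time
from collections import deque

class RollingWindowLimiter:
    """Hypothetical sketch of an 'N events per rolling window' limit."""

    def __init__(self, max_events=100, window_s=24 * 3600):
        self.max_events = max_events
        self.window_s = window_s
        self.events = deque()  # timestamps of allowed events, oldest first

    def allow(self, now=None):
        now = time.time() if now is None else now
        # drop events that have aged out of the window
        while self.events and now - self.events[0] >= self.window_s:
            self.events.popleft()
        if len(self.events) < self.max_events:
            self.events.append(now)
            return True
        return False

# toy demo with a 3-per-10-second window
lim = RollingWindowLimiter(max_events=3, window_s=10)
print([lim.allow(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
print(lim.allow(now=11))                         # True: the t=0 event aged out
```

The same structure works for any per-user quota; a real service would of course keep the deque (or a counter approximation) per account, server-side.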
Setting the limit that high for a soft launch is bizarre. I got access to Sora and got the gist of it with like 10 generations.
Indeed. If you read between the lines that’s clearly it.
And on that note can I add how much I truly despise sentences like this:
> We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all).
To me this sentence sums up a certain kind of passive aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl. It’s sort of a conceptual cousin to concepts like banning someone from a service without even telling them or using words like “sunset” instead of “cancel” and so on.
What that sentence actually fucking means is that a lot of powerful people with valuable creative works contacted them with lawyers telling them to knock this the fuck off. Which they thought was appropriate to put in parentheses at the end as if it wasn’t the main point.
Wow, I am sure excited for your new kind of interactive fan fiction of my properties. It will accrue us a lot of value! Anyway, please do not use our properties.
Nice but there's no need for the "please": it's not a request, it's a demand from an official lawyer-penned, strongly-worded, lawsuit actionable letter.
You may not like their message, but the style can be found in practically any public communication from any corporation. Read a layoff announcement from Novo Nordisk as an example [1]. No difference.
This is what I don’t like about HN, manufactured outrage when one dislikes the messenger. No substance whatsoever.
When users are given such a powerful tool like Sora, there will naturally be conflicts. If one makes a video putting a naked girl in a sacred Buddhist temple in Bangkok, how do you think Thai people will react?
This is OpenAI attempting balancing acts between conflicting interests, while trying to make money.
[1]-https://www.novonordisk.com/content/nncorp/global/en/news-an...
I actually really like that comment. It's an example of classic doublespeak and it's a shame that "Open"AI uses it and we as society tolerate that (as well as other companies of course)
If we’re going on HN rants, this bizarre tendency of reframing the blatantly obvious into something it isn’t doesn’t help any argument.
The messenger isn’t some random, disconnected third party here.
It feels like big exploitative multimedia companies are the main force fighting big exploitative ML companies over copyright of art.
I wish big exploitative tech companies would fight them over copyright of code but almost all big exploitative tech companies are also big exploitative ML companies.
Oracle to the rescue? What a sick, sad world.
I'm not really disagreeing with you, but I think it's more about salesmanship than anything else. "We released v1 and copyright holders immediately threatened to sue us, lol" sounds like you didn't think ahead, and also paints copyright holders in a negative light; copyright holders who you need to not be enemies but who, if you're not making it up, are already unhappy enough to want to sue you.
Sam's sentence tries to paint what happened in a positive light, and imagines positive progress as both sides work towards 'yes'.
So I agree that it would be nice if he were more direct, but if he's even capable of that it would be 30 years from now when someone's asking him to reminisce, not mid-hustle. And I'd add that I think this is true of all business executives, it's not necessarily a Silicon Valley thing. They seem to frequently be mealy-mouthed. I think it goes with the position.
Exactly. I really hope Altman gets what's coming to him.
> To me this sentence sums up a certain kind of passive aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl.
To me that's Sam Altman in a nutshell. I remember listening to an extended interview with him and I felt creeped out by the end of it. The film Mountainhead does a great job capturing this.
> rightsholders
It's telling how society values copyright in different media that, four years into people yelling about these being copyright violation machines, the first emergency copyright update has come with video.
The only thing we need is an emergency copyright deprecation.
So people who spend time working on code or art should have exactly zero protection against somebody else just taking their work and using it to make money?
No, but the current system is totally idiotic. Why not have a fixed timeframe, e.g. 30-50 years, to make money? Life of the author + x years is stupid: not only is it way too long, it keeps going until way after the creator is no longer benefitting, and it can cause issues with works where you don't know who the author is, so you can't cleanly say a work is old enough to be out of copyright.
I'm not sure this would actually change very much for most creators (specifically smaller ones, who need the most protection). Media typically makes money in its first few years of life, not 70 years on.
The shareholder class would demand rapid-fire exploitation of © the moment it expired, and the resulting media would be a soup of mediocrity. The idea is to recognize that the highly creative have unique imaginations that invent paradigms that propel culture; excluding others from those works for 70+ years is what generates that. Had Lucas gained the rights to Flash Gordon (De Laurentiis beat him to it) he'd never have been forced to create the SW universe. Think about constraints as the path to progress.
This does not demonstrate a sound understanding of how the public domain works, why copyright lengths have been extended so ferociously over the last century (it's shareholders who want this), nor the impact it has both on creative process and public conversation.
This is a highly complex question about how legal systems, companies, and individual creatives come in conflict, and cannot be summarized as a positive creative constraint / means to celebrate their works.
I develop copyrighted material, from the letter and the images, that I've both sold to studios and own myself. Copyright lengths are there to prevent the shareholder class from rapid exploitation. Once copyright declines to years, not decades, shareholders will demand that existing works be exploited rather than new ideas. The public conversation is rather irrelevant, as the layperson doesn't have a window into the massive risk and long-term development required to invent new things; that's why copyright is not a referendum, it's a specialized discourse. Yes, the idea of long-term copyright developed under work-for-hire or individual ownership can be easily summarized: license, sample, or steal. Those are the windows.
Then the solution is fixing the problem, not removing any protections at all.
In fact, copyright should belong to the people who actually create stuff, not those who pay them.
Yes.
(The "takers" also do not have copyright protection.)
So basically the only winners should be:
- owners of large platforms who don't care what "content"[0] is successful or if creators get rewarded, as long as there is content to show between ads
- large corporations who can afford to protect their content with DRM
Is that correct?
Do you expect it to play out differently? Game it out in your head.
[0]: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/#:~:text...
Great, you've just removed any incentive for people to make anything.
The vast amount of permissively licensed work directly contradicts you.
Even if you take away copyright, there are plenty of incentives to create. Copyright is not the sole reason people create.
Vague. Are you talking about reasons to create like the joy of creating? Your bio describes you as a 'tech entrepreneur', not 'DIY tinkerer'. So I'll assume that when you spend a great deal of time entrepreneuring something, you do so with the hope of remuneration. Maybe not by licensing the copyright, but in some form.
Permissive licenses are great in software, where SAAS is an alternative route to getting paid. How does that work if you're a musician, artist, writer, or filmmaker who makes a living selling the rights to your creative output? It doesn't.
> Vague. Are you talking about reasons to create like the joy of creating?
That’s one of them, but I really don’t have to be specific about the reasons. I just have to point out the existence of permissively licensed works. You said:
> Great, you've just removed any incentive for people to make anything.
This is very obviously untrue. Perhaps you meant to say “…you’ve just removed some incentives for people to make some things”?
It's ok I don't have any talent so that won't affect me
"Hi, as the company that bragged about how we had ripped off Studio Ghibli, and encouraged you to make as many still frames as possible, we would now like to say that you are making too many fake Disney films and we want you to stop."
Cue an open weights model from Qwen or DeepSeek with none of these limitations.
These attempted limitations tend to be very brittle when the material isn’t excised from the training data, even more so when it’s visual rather than just text. It becomes very much like that board game Taboo where the goal is to get people to guess a word without saying a few other highly related words or synonyms.
For example, I had no problem getting the desired results when I prompted Sora for "A street level view of that magical castle in a Florida amusement area, crowds of people walking and a monorail going by on tracks overhead."
Hint: it wasn't Universal Studios, and unless you know the place by sight you'd think it was the mouse's own place.
On pure image generation (I forget which model, one derived from Stable Diffusion though), there was clearly a trained unweighting of Mickey Mouse such that you couldn't get him to appear by name, but go at it a little sideways? Even just "Minnie Mouse and her partner"? Poof, guardrails down. If you have a solid intuition of the term "dog whistling" and how it's done, it all becomes trivial.
Absolutely. Though the smarter these things get, and the more layers of additional LLMs on top playing copyright police that there are, I do expect it to get more challenging.
My comment was intended more to point out that copyright cartels are a competitive liability for AI corps based in "the west". Groups who can train models on all available culture without limitation will produce more capable models with less friction for generating content that people want.
People have strong opinions about whether or not this is morally defensible. I'm not commenting on that either way. Just pointing out the reality of it.
It's a matter of time. I imagine they'll get more effect suppressing activations of specific concepts within the LLM, possibly in real time. I.e., instead of filtering the prompt for "Mickey Mouse" analogies, or unlearning the concept, or even checking the output before passing it to the user, they could monitor the network for specific activation patterns and clamp them during inference.
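The "clamp activations during inference" idea sketched above roughly corresponds to activation steering / concept ablation from the interpretability literature: find a direction in a layer's activation space associated with a concept, then damp the component along that direction at inference time. A minimal PyTorch sketch, assuming you already have a probed concept direction (here just a random stand-in, and a plain Linear layer standing in for a transformer block); this is an illustration of the technique, not OpenAI's actual mechanism:

```python
import torch

def make_clamp_hook(direction, strength=0.0):
    """Forward hook that damps the component of a layer's output
    along `direction`. strength=0.0 removes the component entirely;
    strength=1.0 leaves the output unchanged."""
    d = direction / direction.norm()
    def hook(module, inputs, output):
        coeff = output @ d  # (batch, seq) projection onto the concept direction
        return output - (1.0 - strength) * coeff.unsqueeze(-1) * d
    return hook

# toy setup: one layer standing in for a transformer block
layer = torch.nn.Linear(8, 8)
concept = torch.randn(8)  # hypothetical probed "forbidden concept" direction
handle = layer.register_forward_hook(make_clamp_hook(concept))

x = torch.randn(2, 4, 8)  # (batch, seq, hidden)
y = layer(x)
proj = y @ (concept / concept.norm())
print(proj.abs().max().item())  # ~0: the concept component has been clamped out
```

The hard parts in practice are finding directions that cleanly isolate a concept and doing so without degrading everything else, which is presumably why prompt filters are still the first line of defense.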
> "We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)"
Marvelous ability to obscure the simple message "rightsholders told us to fuck off"
Viacom-suing-YouTube-after-it-used-all-its-IP-as-a-growth-hack vibes
Lol blast from the past. Real Gs remember this.
Obviously, OpenAI could have had copyright restrictions in place from the get-go with this, but instead made an intentional decision to allow people to generate everything ranging from Spongebob videos to Michael Jackson videos to South Park videos.
Today, Sora users on reddit are pretty much beside themselves because of newly enabled content restrictions. They are (apparently) no longer able to generate these types of videos and see no use for the service without that ability!
To me it raises two questions:
1) Was the initial "free for all" a marketing ploy?
2) Is it the case that people find these video generators a lot less interesting when they have to come up with original ideas for the videos and cannot use copyright characters, etc?
These video generators are mostly useful for memes at this point, and adding copyright shackles makes them a lot less useful for memeing.
I just heard people were making full-length South Park episodes with Sora 2. But it seems this has now been banned by OpenAI.
It began as floor wax now it's a dessert topping.
Just because something can be done doesn't mean it should be
The logic is that if they don't do it then Meta or some other company will & they have decided it's better that they do it b/c they are the better, more righteous, & moral people. But the main issue is I don't understand how they went from solving general intelligence to becoming an ad sponsored synthetic media company without anyone noticing.
Oh, we all noticed, but this is a new level of entrepreneurial narcissism and corporate gaslighting. Maybe one day Sam Altman will generally be perceived as who he actually is.
He is the boy wonder genius who will usher in an era of infinite abundance, but before he does that he has to take a detour to generate a lot of synthetic media & siphon a lot of user queries at every hour of every day so that advertisers can better target consumers w/ their plastic gadgets & snake oil. I'm sure they just need a few more trillions in data center buildouts & then they can get around to building the general purpose intelligence that will deliver us to the fully automated luxurious communist utopia.
Revenue sharing for AI generated videos of characters sounds completely insane.
I can't tell if this is face saving or delusion.
As someone who is concerned about how artists are supposed to earn a living in an ecosystem where anyone can trivially copy any style effortlessly, it does sound better than the status quo?
The fact that LLMs are trained on humans data yet the same humans receive no benefits from it (cannot even use the weights for free, even if they unwillingly contributed to it existing), kind of sucks.
What alternative is there? Let companies freely slurp up people's work and give absolutely nothing back?
It sounds insane to you but sounds completely normal to me.
Why should AI generated videos not have revenue sharing?
In the end what matters is whether people enjoy the video; it does not matter if it's AI-created or human-created.
Broke: cure cancer, new physics, agi, take your jobs, what have you. Please give us a trillion.
Woke: AI slop tictoc to waste millions of human-hours.
You make a good point. They may as well admit at this point that curing cancer, new physics, and AGI aren't going to happen very soon.
What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/TV studios, producers, etc. Why package it as an app for your niece to make viral videos that's bound to lose money with every click? Just sell it for $50k/hr of video to someone with deep pockets. Is it just a publicity stunt?
> What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/tv studios, producers, etc.
Because it’s not good enough, I would assume. Hard to see it actually being useful in this role.
The query data they are collecting can be used for ad targeting. Remember: if you're not paying for it (and in many cases even when you are paying for it), the data collected from your use of the application is going to be used by someone to make money one way or another. Google made billions from search queries, and OpenAI has an even better query/profiling perspective on its users, because it provides all sorts of different modalities for interaction. That data is extremely valuable, analogous to how Google search queries (along with data from their other products) are extremely valuable to corporate marketing departments willing to pay a premium for access to Google's targeting algorithms.
Almost as if the AGI talk was what a CEO would do to pump the hype of their company as much as possible.
> AI slop TikTok to waste millions of human-hours.
Don't forget the power it consumes from an already overloaded grid [while actively turning off new renewable power sources], the fresh water data centers consume for cooling, and the noise pollution forced on low-income residents.
As a European, I don't know if it's more funny or sad that American citizens close to the data centers are effectively subsidizing AI for the rest of the world by paying more for their electricity, since the data centers are mostly there.
Well, yeah, but that stuff was all bullshit, whereas the fake tiktok kind of exists and might keep the all-important money taps on for another six months or so.
is it still invite only? I tried downloading the app to give it a whirl, but apparently you need a code to even open the app
>Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.
Once again, Scam Altman looking for excuses to raise more money. What a joke…
It is sad (and predictable, PR- and legal-wise) that there was no mention of Studio Ghibli.
I would actually be moved if there were something genuine along the lines of "We are sorry - we wanted to make a PR stunt, but we went too hard," and they offered real money for it. (Not that I believe it is going to happen, as GenAI companies don't like this kind of precedent.)
That is my reminder to generate more AI slop to burn through all that VC cash.
Someone I know uses ChatGPT a lot. Not because they find it incredibly valuable, but because they want to stick it to the VCs funding OAI and increase their costs with no revenue.
So this is why you have to be careful about usage numbers. The only truly meaningful number is how many users contribute to revenue. Without that, OAI is just a giant money sink.
I suspect this has the opposite effect. More daily users == higher valuation, so more profit if the VCs decide to sell. There's no pressure on OpenAI to become profitable yet.
The OpenAI dream: replace your job with AI, replace your free time with AI slop?
And replace rightsholders with “maybe we will try to revenue share… maybe”
They also said at one point they'll share their profits with the world as UBI
Is this a roundabout way to say that they've realised that people are using their service to make porn of celebrities and fictional characters in the entertainment industry, and aim to figure out a way to keep making money from it without involving "rightsholders" in scandals?
The detail that rightsholders seem to be demanding a revenue share is interesting. That sounds administratively and technologically very complex, and probably also just plain expensive, to implement.
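To give a sense of why per-generation revenue sharing gets administratively heavy, here is a deliberately toy sketch. Nothing in the thread describes OpenAI's actual design; the attribution tags, payout rate, and studio names below are all hypothetical. The point is just that every single generation would need rightsholder attribution before any pro-rata split can happen.

```python
from collections import defaultdict

# Toy model: each generated video is tagged with the rightsholders whose
# characters it used, and a fixed share of that video's revenue is split
# evenly among them. All names and rates are made up for illustration.

REV_SHARE_RATE = 0.20  # hypothetical fraction of per-video revenue paid out


def settle(videos):
    """videos: list of (revenue_usd, [rightsholder, ...]) tuples."""
    payouts = defaultdict(float)
    for revenue, rightsholders in videos:
        if not rightsholders:
            continue  # nothing to attribute, no payout
        cut = revenue * REV_SHARE_RATE / len(rightsholders)
        for holder in rightsholders:
            payouts[holder] += cut
    return {holder: round(total, 2) for holder, total in payouts.items()}


videos = [
    (1.00, ["StudioA"]),             # one holder gets the whole cut
    (1.00, ["StudioA", "StudioB"]),  # split two ways
    (1.00, []),                      # no claimed characters
]
print(settle(videos))  # {'StudioA': 0.3, 'StudioB': 0.1}
```

Even this toy version assumes the hard part is already solved: reliably detecting whose characters appear in an arbitrary generated video, which is exactly the piece that makes the real system expensive.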
Sam says Sora 2 has to make money but there is no revenue model that can feasibly offset a $4-5 compute cost per video.
With some back-of-the-napkin math, I am pretty sure you're off by at least two orders of magnitude, conceivably four. I think 2 cents per video is an upper limit.
https://news.ycombinator.com/item?id=45434298
Generally speaking, API costs that the consumer sees are way higher than compute costs that the provider pays.
EDIT: Upper limit on pure on-going compute cost. If you factor in chip capital costs as another commentator on the other thread pointed out, you might add another order of magnitude.
You also aren’t including amortized training costs, which are immense (and likely ongoing as they continue to tweak the model).
I suspect amortized training costs are only a relatively small fraction of the amortized hardware costs (i.e. counting amortized hardware costs already accounts for the large fraction of the cost of training and pulling out training as a completely separate category double counts a lot of the cost).
Where did you get that figure from?
It's more of a ballpark, since exact numbers vary and OpenAI could be employing shenanigans to cut costs. But for comparison, Veo 3, which produces 720p video of similar quality, costs the user $0.40/second, and Sora's videos are 10 seconds each. Although Veo 3 could cost Google more or less than what is charged to the user.
I suspect OpenAI’s costs to be higher if anything since Google’s infra is more cost-efficient.
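The arithmetic behind the competing estimates in this sub-thread is easy to lay out. The numbers below come straight from the comments above (Veo 3's $0.40/second list price, ~10-second Sora clips, and the rebuttal's ~2-cent compute claim); none of them are confirmed OpenAI figures.

```python
# Back-of-the-napkin check of the per-video cost claims in this thread.
# All inputs are thread claims, not official OpenAI numbers.

veo3_price_per_second = 0.40  # USD, Veo 3 user-facing price (per the comment above)
sora_clip_seconds = 10        # typical Sora clip length cited above

# The $4-5 figure roughly matches a comparable list price, not raw compute:
list_price_per_video = veo3_price_per_second * sora_clip_seconds
print(f"Comparable list price per video: ${list_price_per_video:.2f}")  # $4.00

# The rebuttal's claim is that marginal compute is ~2 cents, i.e. the
# list price would carry a very large margin over provider compute cost.
claimed_compute_cost = 0.02
markup = list_price_per_video / claimed_compute_cost
print(f"Implied markup over claimed compute cost: {markup:.0f}x")  # 200x
```

That ~200x gap is the "two orders of magnitude" disagreement in a nutshell: both sides can be internally consistent if one is quoting user-facing price and the other marginal compute.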
It was revealed to them in a dream.
This "but it's too hard to implement" excuse never made sense to me. So it's doable to build a system like this, to have smart people working on it, to hire and poach other smart people, to have payment systems, tracking systems, personal data collection, request filtering, and content awareness, all that jazz, but somehow all of that grinds to a halt the moment a question like this arises? And it's been a problem for years, yet some of the smartest people are unable even to approach it, let alone solve it? Doesn't it seem absurd to watch them ship "most advanced" products over and over, then pretend this question is "too complex" to solve? Shouldn't they be smart enough to rise to that level of "complexity" anyway?
Seems more like selective, intentional ignoring of the problem to me. If they start to pay up, everyone will want to get paid, and paying other people is something that companies like this systematically try to avoid as much as possible.
This is how all work should be rewarded.
Workers getting paid a flat rate while owners are raking in the entire income generated by the work is how the rich get richer faster than any working person can.
So it sounds like they "released" this fully aware it would generate loads of hype but never be legally feasible at scale, so we can expect some heavily cut-down version to eventually be publicly released?
Feels very much like a knee-jerk response to Facebook releasing their "Vibes" app the week before. It's basically the same thing, OpenAI are probably willing to light a pile of money on fire to take the wind out of their sails.
I also don't think the "Sam Altman" videos were authentic/organic at all, smells much more like a coordinated astroturfing campaign.
Or to distract from the new routing and intent/context detection thing they have going on.
I don't have access, but it seems you can insert a friend into a video? Are we not rightsholders to our own likeness? It seems like a person should be able to block a video shared without their consent, or earn revenue if their likeness is used.
You have to explicitly opt into sharing your likeness with permission controls.
> person should be able to block a video someone shares without their consent
That is already implemented.
> You have to explicitly opt into sharing your likeness with permission controls.
Ok... how is that supposed to work? I don't have an OpenAI account, there are no permission controls for me. Someone else could easily upload a picture of me, no?
No, you have to register yourself with a video where you're required to say a unique code.
So unless you've posted a video of yourself online saying every number from 1 to 99, they won't be able to copy your likeness.
This seems... pretty easy to get around? There are already open weight models which can take any photo and audio and make a video out of it with the character speaking/singing/whatever, and it runs on normal consumer hardware.
So you wouldn't know what the three numbers are ahead of time; you'd have to use a real-time face-replacement model (or live-switch between pre-rendered clips) and somehow convince the app that you're the iPhone selfie cam.
But at that point you might as well just use WAN 2.2 Animate and forget about Sora.
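The challenge-response flow described above (show a random code, make the user read it aloud on camera, verify the transcript) can be sketched in a few lines. This is purely illustrative; OpenAI's actual registration flow isn't public, and the function names and digit count here are assumptions.

```python
# Hypothetical sketch of a spoken-code liveness check, as described in
# the comments above. A pre-recorded clip can't contain digits it has
# never seen, which is what forces an attacker into real-time synthesis.

import secrets


def issue_challenge(num_digits: int = 3) -> str:
    # Fresh, unpredictable digits per session via a CSPRNG.
    return "".join(str(secrets.randbelow(10)) for _ in range(num_digits))


def verify_spoken_code(challenge: str, transcript: str) -> bool:
    # In a real system the transcript would come from speech-to-text on
    # the uploaded selfie video; here we just compare digit sequences.
    spoken_digits = "".join(ch for ch in transcript if ch.isdigit())
    return spoken_digits == challenge


challenge = issue_challenge()
assert verify_spoken_code(challenge, f"my code is {' '.join(challenge)}")
assert not verify_spoken_code(challenge, "uh, I forgot the code")
```

As the sibling comments note, this only raises the bar to real-time face/voice replacement; it doesn't make impersonation impossible.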
That's more than I expected from them, genuinely. But it still doesn't seem like a very solid solution. I wonder how much variation in look and voice it accepts?
My partner likes to cosplay, and some of the costumes are quite extensive. If they want to generate a video in a specific outfit will they need to record a new source video? The problem exists in the other direction, too. If someone looks a lot like Harrison Ford, will they be able to create videos with their own likeness?
I wonder how this extends to videos with multiple people, as well. E.g. if both my friend and I want to be in a video together.
It's not like making a video of someone saying a number, given a single photo and a voice sample, is a very difficult problem today. We can just fast-forward a few weeks into a world where this "registration" is already broken.
So only the heads of companies who lead shareholder meetings are vulnerable to this exploit? Cool.
At Stay22 we've built a way to monetize travel videos automatically with a multi-modal LLM, starting with travel and soon expanding into retail. It's already live and being tested with a few YouTubers via rev share. It automatically detects destinations, activities, and hotels, implicitly through the visuals or explicitly from the vlogger's voice.
Few live examples: https://www.youtube.com/watch?v=A3tObRu8EuM https://www.youtube.com/watch?v=6v03mG1mMi0
That could be a way forward, at least for travel content, if you work at OpenAI and want to collab.