YouTube caught making AI-edits to videos and adding misleading AI summaries (ynetnews.com)
randycupertino 5 days ago [-]
A makeup influencer I follow noticed that YouTube and Instagram are automatically applying filters to his face in his videos, without permission. If his content is about lip makeup, they make his lips enormous; if it's about eye makeup, the filters make his eyes gigantic. They're having AI detect the type of content and automatically apply filters.

https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...

The screenshots/videos of them doing it are pretty wild, and it's insane that they are editing creators' uploads without consent!

Aurornis 5 days ago [-]
The video shown as evidence is full of compression artifacts. The influencer is non-technical and assumes it's an AI filter, but the output is obviously not good quality anywhere.

To me, this clearly looks like a case of a very high compression ratio with the motion blocks swimming around on screen. They might have some detail enhancement in the loop to try to overcome the blockiness which, in this case, results in the swimming effect.

It's strange to see these claims being taken at face value on a technical forum. It should be a dead giveaway that this is a compression issue because the entire video is obviously highly compressed and lacking detail.

popalchemist 5 days ago [-]
You obviously didn't watch the video. The claims go beyond compression and include things like eye and mouth enlargement, and you can clearly see the filter glitching off in some frames.
cromka 4 days ago [-]
Someone in the comments explained that this effect appears in auto-translated videos. Meta and YT apparently use AI to modify the videos so that speakers' lip movements match the translated language. Which is a nightmare on its own, but not exactly the same thing.
veeti 4 days ago [-]
I've come across these auto translated videos while traveling, and actually found them quite helpful. Lot of local "authentic" content that I wouldn't have seen otherwise.
bootsmann 4 days ago [-]
It's all kinds of annoying if you're bilingual. YouTube now auto-translates ads served in my mother tongue into English, and I have not found a way to turn it off.
beezlewax 4 days ago [-]
This is an incredibly annoying feature
veeti 4 days ago [-]
I have uBlock Origin.
taegee 4 days ago [-]
Set your mother tongue to some esoteric language Google has not enough training data on. Then it defaults to the original language.
machomaster 3 days ago [-]
Unfortunately it also changes the interface, and I would like my interface to be in English.
beAbU 3 days ago [-]
This is how I got rid of ads on facebook!
sofixa 4 days ago [-]
I really hate them. Once again, Google have completely failed to consider multi-lingual people. Like Google search, even if you explicitly tell it what languages it should show results in, it's often wrong and only gives results in Russian when searching in Cyrillic, even for words that do not exist in Russian but do in the language defined in the settings.

Also the voice is pretty unemotional and has nothing to do with the original voice. And it being a default that you can't even seem to disable...

TRiG_Ireland 4 days ago [-]
Last night, I came across a video with a title in English and an "Autodubbed" tag. I assumed it would be dubbed into English (my language) from some other language. But it wasn't. It was in French, and clearly the creator's original voice. The automatic subtitles were also in French. I don't know what the "Autodubbed" tag meant, but clearly something wasn't working.

I am by no means fluent in French, but I speak it well enough to get by with the aid of the subtitles, so that was fine. In an ideal world, I'd have the original French audio with English subtitles, but that did not appear to be an option.

jack_pp 4 days ago [-]
Recently they added a setting for default language
watwut 4 days ago [-]
I don't want a default language. I understand multiple of them. And it's ridiculous that I even have to set it up.

Provide option to turn on that bad quality dubbing for those few people that want it.

And for Christ's sake, be transparent instead of silently auto-translating titles, so I at least know the badly written title wasn't the original author's doing.

hulitu 1 day ago [-]
> I don't want a default language. I understand multiple of them. And it's ridiculous that I even have to set it up.

For Silicon Valley, it is difficult to comprehend that people may speak more than one language. That's why you get programs in languages other than the one intended (phone set to English: you get the English version of the app), or they "offer" (OK / Not now) to translate.

sofixa 4 days ago [-]
But I'm fluent in multiple, and wouldn't want a video in a language I'm fluent in to be shittily AI-dubbed into another language.
baobun 4 days ago [-]
As if that's a solution. I feel we need a "Falsehoods programmers believe about language" campaign.
pwdisswordfishy 3 days ago [-]
Falsehoods Americans think about language.
baobun 3 days ago [-]
That list would be incomplete. Americans at least don't tend to "helpfully" automatically proxy their whole site through Google Translate when they detect foreign IPs.
machomaster 3 days ago [-]
The most baffling thing is that we aren't talking about hurrah-Americans here. We are talking about Google, which is full of Indians at all levels of the company. They, if anyone, should understand multilingual people, and yet... such an incredible mess, still not fixed after many months.
hulitu 1 day ago [-]
I bet you never used Microsoft or Mozilla products. /s
account42 2 days ago [-]
> about: Just trolling.

Hmmmmm.

xboxnolifes 5 days ago [-]
There are some very clear examples elsewhere. It looks as if youtube applied AI filters to make compression better by removing artifacts and smoothing colors.
Aurornis 5 days ago [-]
> There are some very clear examples elsewhere.

Such as?

This seems like such an easy thing for someone to document with screenshots and tests against the content they uploaded.

So why is the top voted comment an Instagram reel of a non-technical person trying to interpret what's happening? If this is common, please share some examples (that aren't in Instagram reel format from non-technical influencers)

maxbond 5 days ago [-]
> So why is the top voted comment an Instagram reel of a non-technical person trying to interpret what's happening?

It's difficult for me to read this as anything other than dismissing this person's views as unworthy of discussion because they are "non-technical", a characterization you objected to; if you feel this shouldn't be the top-level comment, I'd suggest you submit a better one.

Here's a more detailed breakdown I found after about 15m of searching, I imagine there are better sources out there if you or anyone else cares to look harder: https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh...

To me it's fairly subtle, but there's a waxy texture in the second screenshot. This video presents some more examples; some of them are more textured: https://www.youtube.com/watch?v=86nhP8tvbLY

ffsm8 5 days ago [-]
Upscaling and even de-noising are something very different from applying filters that increase the size of lips/eyes...
maxbond 5 days ago [-]
It's a different diagnosis, but the problem is still, "you transformed my content in a way that changes my appearance and undermines my credibility." The distinction is worth discussing but the people levying the criticism aren't wrong.

Perhaps a useful analogy is "breaking userspace." It's important to correctly diagnose a bug that breaks userspace in order to ship a fix. But if it's a change that breaks userspace workflows, it's a bug, full stop. Whether it met the letter of some specification and is "correct" in that sense doesn't matter.

If you change someone's appearance in your post processing to the point it looks like they've applied a filter, your post processing is functionally a filter. Whether you intended it that way doesn't change that.

hombre_fatal 4 days ago [-]
Well, this was the original claim:

> If his content was about lip makeup they make his lips enormous and if it was about eye makeup the filters make his eyes gigantic. They're having AI detecting the type of content and automatically applying filters.

No need to downplay it.

maxbond 4 days ago [-]
I didn't downplay it, I just wasn't talking about that at all. The video I was talking about didn't make that claim, and I wasn't responding to the comment which did. I don't see any evidence for that claim though. I would agree the most likely hypothesis is some kind of compression pipeline with an upsampling stage or similar.

ETA: I rewatched the video to the end, and I do see that they pose the question about whether it is targeted at certain content at the very end of the video. I had missed that, and I don't think that's what's happening.

DonHopkins 4 days ago [-]
As a makeup technician who looks in the mirror a lot, he's technically skilled at recognizing his own face.
whstl 4 days ago [-]
It's in TFA.

Rhett Shull's video is quite high quality and shows it.

When it was published I went to YouTube's website, watched the Rick Beato short he mentioned, and it was clearly AI-enhanced.

I used to work with codec people and have been friends with them for years, so what TFA is talking about is definitely not something a codec would do.

bbarnett 4 days ago [-]
In the best tradition of gaslighting and redirection, YouTube invents a new codec with integrated AI, thus vastly complicating your ability to make this point.

After you post a cogent explanation of why the integrated AI filtering is just that, and not actually part of the codec, YouTube creates dozens of channels with AI-generated personalities, all explaining how you're nuts.

These channels and videos appear on every webpage supporting your assertions, including at the top of search results. Oh, and in AI summaries on Google search whenever the topic is searched, too.

maxbond 5 days ago [-]
This is an unfair analysis. They discuss compression artifacts, and they highlight things like their eyes getting bigger, which is not what you usually expect from a compression artifact.

If your compression pipeline gives people anime eyes because it's doing "detail enhancement", your compression pipeline is also a filter. If you apply some transformation to a creator's content, and then their viewers perceive that as them disingenuously using a filter, and your response to their complaints is to "well actually" them about whether it is a filter or a compression artifact, you've lost the plot.

To be honest, calling someone "non-technical" and then "well actually"ing them about hair splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way. Regardless of whether they are technical, they are living in a world increasingly intermediated by technology, and we should be listening to their feedback on it. They have to live with the consequences of our design decisions. If we believe them to be non-technical, we should extend a lot of generosity to them in their use of terminology, and address what they mean instead of nitpicking.

Aurornis 5 days ago [-]
> To be honest, calling someone "non-technical" and then "well actually"ing them about hair splitting details when the outcome is the same is patronizing, and I really wish we wouldn't treat "normies" that way.

I'm not critiquing their opinion that the result is bad. I also said the result was bad! I was critiquing the fact that someone on HN was presenting their non-technical analysis as a conclusive technical fact.

Non-technical is describing their background. It's not an insult.

I will be the first to admit I have no experience or knowledge in their domain, and I'm not going to try to interpret anything I see in their world.

It's a simple fact. This person is not qualified to explain what's happening, yet their analysis was being repeated as conclusive fact here on a technical forum.

maxbond 5 days ago [-]
"The influencer is non-technical" and "It's strange to see these claims being taken at face value on a technical forum," to me, reads as a dismissal. As in, "these claims are not true and this person doesn't have the background to comment." Non-technical doesn't need to be an insult to be dismissive. You are giving us a reason not to down weight their perspective, but since the outcome is the same regardless of their background, I don't think that's productive.

I don't really see where you said the output was "bad," you said it was a compression artifact which had a "swimming effect", but I don't really see any acknowledgement that the influencer had a point or that the transformation was functionally a filter because it changed their appearance above and beyond losing detail (made their eyes bigger in a way an "anime eyes" filter might).

If I've misread you I apologize but I don't really see where it is I misread you.

panxyh 4 days ago [-]
The outcome is visible and not up for discussion, so is the fact that this is a problem for the influencer.

He's getting his compassionate nodding and emotional support in the comments over there.

I agree that him being non-technical shouldn't be discussion-ending in this case, but it is a valid observation, whether necessary or not.

maxbond 4 days ago [-]
I'm not commenting on Instagram, I'm not asking anyone to provide this random stranger with emotional support, and I'm not disputing that the analysis was non technical.
panxyh 5 days ago [-]
The difference is whether the effect is intentional or not.

"Non-technical" isn't an insult.

What you call "well actually"ing is well within limits on a technical forum.

maxbond 5 days ago [-]
From a technical standpoint it's interesting whether it's deliberate and whether it's compression, but it's not a fair criticism of this video, no. Dismissing someone's concerns over hair-splitting is textbook "well actually"ing. I wouldn't have taken issue with a comment discussing the difference from a perspective of technical curiosity.
Aurornis 5 days ago [-]
> Dismissing someone's concerns

I agreed that the output was bad! I'm not dismissing their concerns, I was explaining that their analysis was not a good technical explanation for what was happening.

reactordev 5 days ago [-]
I can hear the ballpoint pens now…

This is going to be a huge legal fight, as the terms of service you agree to on their platform amount to "they get to do whatever they want" (IANAL). Watch them try to spin this as a "user preference" that they just opted everyone into.

api 5 days ago [-]
That’s the rude awakening creators get on these platforms. If you’re a writer or an artist or a musician, you own your work by default. But if you upload it to these platforms, they own it more or less. It’s there in the terms of service.
gessha 5 days ago [-]
What are they going to do though, go to one of the ten competing video hosting platforms?
Bombthecat 4 days ago [-]
Yeah, we decided there is only YouTube and only YouTube.

Also, no one else can bear the sheer amount of traffic and cost.

mhdhn 4 days ago [-]
nonexistent
sodapopcan 5 days ago [-]
What if someone else uploads your work?
benoau 5 days ago [-]
Section 230 immunity for doing whatever they want, as long as they remove it if you complain.
mitthrowaway2 5 days ago [-]
Do they also remove it from the AI model weights they trained on it while it was uploaded?
api 4 days ago [-]
No.
weird-eye-issue 5 days ago [-]
One of the comments on IG explains this perfectly:

"Meta has been doing this; when they auto-translate the audio of a video they are also adding an Al filter to make the mouth of who is speaking match the audio more closely. But doing this can also add a weird filter over all the face."

I don't know why you have to get into conspiracy theories about them applying different filters based on the video content. That would be such a weird micro-optimization; why would they bother with that?

eloisius 4 days ago [-]
I doubt that’s what’s happening too but it’s not beyond the pale. They could be feeding both the input video and audio/transcript into their transformer and it has learned “when the audio is talking about lips the person is usually puckering their lips for the camera” so it regurgitates that.
Irishsteve 4 days ago [-]
Some random team or engineer does it to get a promo.
machomaster 3 days ago [-]
Google has done so many incredibly stupid things, like auto-translating titles/information/audio from a language I already know into English, with no way to turn it off.

Assuming that they did something technically impressive, but stupid again is not a conspiracy, but a reasonable assumption based on previous behavior.

ThePowerOfFuet 3 days ago [-]
Here's that link without the tracking linking you to everyone who clicks on it:

https://www.instagram.com/reel/DO9MwTHCoR_

methuselah_in 5 days ago [-]
Is there no option to turn that off? And do they not even publish these things anywhere?
adzm 5 days ago [-]
This is ridiculous
echelon 5 days ago [-]
[flagged]
plorg 5 days ago [-]
If any engineers think that's what they're doing they should be fired. More likely it's product managers who barely know what's going on in their departments except that there's a word "AI" pinging around that's good for their KPIs and keeps them from getting fired.
echelon 5 days ago [-]
[flagged]
asveikau 5 days ago [-]
Videos are expensive to store, but generative AI is expensive to run. That will cost them more than the storage allegedly saved.

To avoid adding compute-heavy processing to video serving, they would need to cache the AI output, which uses up the storage you say they are saving.

echelon 5 days ago [-]
https://c3-neural-compression.github.io/

Google has already matched H.266. And this was over a year ago.

They've probably developed some really good models for this and are silently testing how people perceive them.

hatmanstack 5 days ago [-]
If you want insight into why they haven't deleted "old garbage" you might try The Age of Surveillance Capitalism by Zuboff. Pretty enlightening.
echelon 5 days ago [-]
I'm pretty sure those 12 year olds uploading 24 hour long Sonic YouTube poops aren't creating value.
theendisney 5 days ago [-]
1000 years from now those will be very important. A bit like we are now wondering what horrible food average/poor people ate 1000 years ago.
sgerenser 4 days ago [-]
I’m afraid to search… what exactly is a “24 hour long sonic Youtube poop?”
jsheard 5 days ago [-]
What type of compression would change the relative scale of elements within an image? None that I'm aware of, and these platforms can't really make up new video codecs on the spot since hardware accelerated decoding is so essential for performance.

Excessive smoothing can be explained by compression, sure, but that's not the issue being raised there.

Aurornis 5 days ago [-]
> What type of compression would change the relative scale of elements within an image?

Video compression operates on macroblocks and calculates motion vectors of those macroblocks between frames.

When you push it to the limit, the macroblocks can appear like they're swimming around on screen.

Some decoders attempt to smooth out the boundaries between macroblocks and restore sharpness.

The giveaway is that the entire video is extremely low quality. The compression ratio is extreme.
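The block-matching step described above can be sketched in a few lines. This is a toy illustration in Python, not any real codec's search: actual encoders work on 16x16 luma macroblocks with much smarter search strategies, but the principle (find the offset into the previous frame with the lowest sum of absolute differences) is the same.

```python
def best_motion_vector(prev, cur, bx, by, bs=4, search=2):
    """Find the (dx, dy) offset into the previous frame that best matches
    the bs x bs block at (bx, by) in the current frame, using sum of
    absolute differences (SAD) -- the core of macroblock motion estimation."""
    def sad(dx, dy):
        return sum(
            abs(cur[by + y][bx + x] - prev[by + dy + y][bx + dx + x])
            for y in range(bs) for x in range(bs)
        )
    candidates = [
        (dx, dy)
        for dy in range(-search, search + 1)
        for dx in range(-search, search + 1)
        if 0 <= bx + dx <= len(prev[0]) - bs and 0 <= by + dy <= len(prev) - bs
    ]
    return min(candidates, key=lambda v: sad(*v))

# A bright 4x4 square sits at column 2 in the previous frame and at
# column 3 in the current frame, i.e. it moved one pixel to the right.
prev = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        prev[y][x] = 255
        cur[y][x + 1] = 255

print(best_motion_vector(prev, cur, 3, 2))  # (-1, 0): "copy from one pixel left"
```

At high compression ratios the residual after this copy is thrown away almost entirely, which is why blocks appear to "swim": the decoder is mostly shuffling old blocks around by these vectors.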

echelon 4 days ago [-]
They're doing something with neural compression, not classical techniques.

https://blog.metaphysic.ai/what-is-neural-compression/

See this paper:

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.

Here's one such Google paper:

https://c3-neural-compression.github.io/

echelon 5 days ago [-]
AI models are a form of compression.

Neural compression wouldn't be like HEVC, operating on frames and pixels. Rather, these techniques can encode entire features and optical flow, which can explain the larger discrepancies: larger fingers, slightly misplaced items, etc.

Neural compression techniques reshape the image itself.

If you've ever input an image into `gpt-image-1` and asked it to output it again, you'll notice that it's 95% similar, but entire features might move around or average out with the concept of what those items are.

jsheard 5 days ago [-]
Maybe such a thing could exist in the future, but I don't think the idea that YouTube is already serving a secret neural video codec to clients is very plausible. There would be much clearer signs - dramatically higher CPU usage, and tools like yt-dlp running into bizarre undocumented streams that nothing is able to play.
echelon 5 days ago [-]
A new client-facing encoding scheme would break hardware-accelerated decoding, which in turn slows down everyone's experience, chews through battery life, etc. They won't serve it that way - there's no support in the field for it.

It looks like they're compressing the data before it gets further processed with the traditional suite of video codecs. They're relying on the traditional codecs to serve, but running some internal first pass to further compress the data they have to store.

planckscnst 5 days ago [-]
If they were using this compression for storage at the cache layer, it could allow more videos to sit closer to where they serve them, but they'd decode back to WebM or whatever before sending them to the client.

I don't think that's actually what's up, but I don't think it's completely ruled out either.

jsheard 5 days ago [-]
That doesn't sound worth it, storage is cheap, encoding videos is expensive, caching videos in a more compact form but having to rapidly re-encode them into a different codec every single time they're requested would be ungodly expensive.
LoganDark 5 days ago [-]
Storage gets less cheap for short-form tiktoks where the average rate of consumption is extremely high and the number of niches is extremely large.
throwaway5465 4 days ago [-]
The law of entropy appears true of TikToks and Shorts. It would make sense to take advantage of this. That is to say, the content becomes so generic that it merges into one.
justinclift 5 days ago [-]
The resources required for putting AI <something> inline in the input (upload) or output (download) chain would likely dwarf the resources needed for the non-AI approaches.
eloisius 4 days ago [-]
One that represented compressed videos as an embedding that gets reinflated by having gen AI interpret it back into image frames.
jazzyjackson 5 days ago [-]
Totally. Unfortunately it's not lossless and instead of just getting pixelated it's changing the size of body parts lol
glitchc 5 days ago [-]
Probably compression followed by regeneration during decompression. There's a brilliant technique called "Seam Carving" [1], invented two decades ago, that enables content-aware resizing of photos and can be applied sequentially to frames in a video stream. It's used everywhere nowadays. It wouldn't surprise me if arbitrary enlargements were artifacts produced by such techniques.

[1] https://github.com/vivianhylee/seam-carving
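For the curious, the core of seam carving is a short dynamic program over a per-pixel energy map. A minimal sketch on a grayscale image held as nested lists (illustrative only; real implementations like the repo above use proper gradient energies and handle color):

```python
def energy(img, x, y):
    # Crude gradient-magnitude energy; pixels on strong edges cost more.
    h, w = len(img), len(img[0])
    dx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
    dy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
    return abs(dx) + abs(dy)

def remove_one_seam(img):
    """Delete the lowest-energy vertical seam (dynamic programming),
    shrinking a grayscale image by one column where it matters least."""
    h, w = len(img), len(img[0])
    # cost[y][x] = cheapest seam ending at (x, y).
    cost = [[energy(img, x, 0) for x in range(w)]]
    for y in range(1, h):
        cost.append([
            energy(img, x, y) + min(cost[y - 1][max(x - 1, 0):x + 2])
            for x in range(w)
        ])
    # Backtrack from the cheapest bottom cell, always stepping to the
    # cheapest reachable neighbour in the row above.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    rows = []
    for y in range(h - 1, -1, -1):
        rows.append(img[y][:x] + img[y][x + 1:])
        if y:
            x = min(range(max(x - 1, 0), min(x + 2, w)),
                    key=lambda i: cost[y - 1][i])
    return rows[::-1]

# A flat image has zero energy everywhere, so any seam is fine --
# the width simply shrinks by exactly one column.
img = [[7, 7, 7, 7] for _ in range(3)]
assert all(len(row) == 3 for row in remove_one_seam(img))
```

Because low-energy (smooth) regions are removed preferentially, repeated seam removal can visibly distort the relative proportions of objects, which is the relevance to the "enlarged features" complaints.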

j45 5 days ago [-]
It could be, but if the compression were a codec, new codecs usually get announced on a blog.
Groxx 5 days ago [-]
I largely agree, I think that probably is all that it is. And it looks like shit.

Though there is a LOT of room to subtly train many kinds of lossy compression systems, which COULD still imply they're doing this intentionally. And it looks like shit.

JumpCrisscross 5 days ago [-]
> This is an experiment

A legal experiment for sure. Hope everyone involved can clear their schedules for hearings in multiple jurisdictions for a few years.

echelon 5 days ago [-]
As soon as people start paying Google for the 30,000 hours of video uploaded every hour (2022 figure), then they can dictate what forms of compression and lossiness Google uses to save money.

That doesn't include all of the transcoding and alternate formats stored, either.

People signing up to YouTube agree to Google's ToS.

Google doesn't even say they'll keep your videos. They reserve the right to delete them, transcode them, degrade them, use them in AI training, etc.

It's a free service.

7bit 4 days ago [-]
That's the difference between the US and European countries. When you have SO MUCH POWER like Google, you can't just go around and say ItSaFReeSeRViCe in Europe. With great power comes great responsibility, to say it in American words.
JumpCrisscross 4 days ago [-]
> People signing up to YouTube agree to Google's ToS

None of which overrides what the law says or can do.

> It's a free service

I've paid for it. Don't anymore, in large part because of crap like this reducing content quality.

theendisney 5 days ago [-]
It's not the same when you publish something on my platform as when I publish something and put your name on it.

It is bad enough we can deepfake anyone. If we also pretend it was uploaded by you the sky is the limit.

habinero 5 days ago [-]
"They're free to do whatever they want with their own service" != "You can't criticize them for doing dumb things"
rightbyte 4 days ago [-]
Ye it is such a strange and common take. Like, "if you don't like it why complain?".
heddelt 4 days ago [-]
[flagged]
Bombthecat 4 days ago [-]
That's actually hilarious
Habgdnv 5 days ago [-]
An amateur tip that I sometimes use after I re-encode something, to check what I lost:

ffmpeg -i source.mkv -i suspect.mkv -filter_complex "blend=all_mode=difference" diff_output.mkv

I've seen these claims before but still haven't found anyone showing a diff or posting the source for comparison. It would be interesting.
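For intuition, blend=all_mode=difference is just a per-pixel absolute difference between the two inputs. The same idea on raw 8-bit grayscale samples (a toy sketch, not ffmpeg's actual implementation):

```python
def frame_diff(a: bytes, b: bytes) -> bytes:
    # Per-pixel absolute difference, like ffmpeg's blend=all_mode=difference.
    return bytes(abs(x - y) for x, y in zip(a, b))

original = bytes([10, 200, 128, 0])
reencoded = bytes([12, 198, 128, 255])
print(list(frame_diff(original, reencoded)))  # [2, 2, 0, 255]
```

A mostly-black diff frame means the re-encode changed little; bright regions mark where real edits or heavy artifacts landed.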

randoomed 4 days ago [-]
Jill Bearup posted a video about this a while ago, showing a short and the original side by side: https://www.youtube.com/watch?v=kd692naF-Cc (note the short is shown at 0:31)

Edit: The changes made by the AI are a lot more visible in the higher-quality video uploaded to Patreon: https://www.patreon.com/posts/136994036 (this was also linked in the pinned comment on the YouTube video)

AnonHP 4 days ago [-]
It must be my eyes and the small screen on my phone. I couldn’t find any differences in the video on Patreon, which was annoying enough to watch with the actual comparison clip being just a couple of seconds or so, and I had to rewind and check again. I wish it had shown more of the comparisons. Most of the current video was just commentary.
gblargg 4 days ago [-]
Same here, on a big screen, I don't see anything notable. I really hope this isn't a mass delusion because YouTube started applying a sharpness ("edge enhancement") filter to videos to make them look sharper. It sure looks like that to me, because I hate this filter and how so many movie transfers have it added, with the ringing at the edges this filter leaves.
IshKebab 4 days ago [-]
Yeah I also can't see the difference on the high quality video. I am on my phone though tbf.

Also, minus 100 points to Jill for being happy about being able to use AI to automatically edit out all the silence from her videos. That's far more annoying than any barely perceptible visual artifacts.

Why do people think wall-of-text videos are good?

sgerenser 4 days ago [-]
The before/after on this just looks like compression artifacts/smoothing to me.
tetris11 4 days ago [-]
I still can't see the differences in the patreon
sgerenser 4 days ago [-]
It’s because you’re looking for some kind of “smoking gun” AI transformation. In reality it just looks like the YouTube one is more compressed and slightly blurred. Some people are apparently just learning that YouTube recompresses videos.
rpastuszak 4 days ago [-]
Hehe, I occasionally use a similar approach for visual regression testing: https://untested.sonnet.io/notes/visual-snapshot-tests-cheap...
jbaber 4 days ago [-]
Thank you for this good idea and oneliner.
TazeTSchnitzel 5 days ago [-]
The AI filter applied server-side to YouTube Shorts (and only shorts, not regular videos) is horrible, and it feels like it must be a case of deliberate boiling the frog. If everyone gets used to overly smooth skin, weirdly pronounced wrinkles, waxy hair, and strange ringing around moving objects, then AI-generated content will stand out less when they start injecting it into the feed. At first I thought this must be some client-side upscaling filter, but tragically it is not. There's no data savings at all, and there's no way for uploaders or viewers to turn it off. I guess I wasn't cynical enough.
api 5 days ago [-]
I’ve been saying for a while that the end game for addictive short form chum feeds like TikTok and YouTube Shorts is to drop human creators entirely. They’ll be AI generated slop feeds that people will scroll, and scroll, and scroll. Basically just a never ending feed of brain rot and ads.
coliveira 5 days ago [-]
There's already a huge number of AI-generated channels on YouTube. The only difference is that they're uploaded by channel owners. What's gonna happen very quickly (if not already) is that YouTube itself will start "testing" AI content that it creates, on what will look like new channels. In a matter of a few years they'll promote this "content" until it occupies most of the time and views on the platform.
SaberTail 5 days ago [-]
And then they'll start feeding in data like gaze tracking, and adjust the generated content in real time to personalize it to be maximally addictive for each viewer.
add-sub-mul-div 5 days ago [-]
Perhaps the shorter/dumber the medium and format, the less discerning an audience it attracts. We're seeing a split between people who reject the idea of content without the subtext of the human creation behind it, and people who just take content for what it is on the surface without knowing why it should matter how it was created.
eagleinparadise 5 days ago [-]
I buy into this conspiracy theory; it's genius. It's literally a boiling-the-frog strategy against users. Eventually, everyone will get too lazy to go through the mental work of judging each piece of increasingly convincing content with "is this AI?", spending energy trying to find clues.

And over time the AI content will improve enough where it becomes impossible and then the Great AI Swappening will occur.

account42 2 days ago [-]
On the one hand it's going to be a horrible dystopia. On the other hand, I am not sad to see "Influencer" cease to be a viable job.
mapmeld 5 days ago [-]
Meta already teased making this ("Vibes") in September. Also OpenAI's homepage for their Sora tool is a bunch of AI video shorts.
bitwize 5 days ago [-]
Yes, but what happens when the AIs themselves begin to brainrot (as happens when they are not fed their usual sustenance of information from humans and the real world)?
api 5 days ago [-]
Have you seen what people watch on these things? It won’t matter. In fact, the surreal incoherent schizo stuff can work well for engagement.
absoluteunit1 5 days ago [-]
> the surreal incoherent schizo stuff can work well for engagement.

There are already popular subreddits (something blursed ai, I think) where people upload this type of content, and it seems to be getting decent engagement.

api 4 days ago [-]
That’s a little different. It’s people playing with AI, which is fine.

It seems like a minor difference, but the undifferentiated unlabeled short form addiction feed is much worse.

Reddit has been heading that way though. It hasn’t gone all in yet.

port11 4 days ago [-]
Whenever we open YouTube to play a song for our toddler, we see at least 90% slop Shorts. It's disgusting.
TazeTSchnitzel 7 hours ago [-]
Online content for kids has been mostly slop long before it was viable to use AI to generate it. Society thinks kids will take anything it throws at them, that they have no standards.
AstroBen 5 days ago [-]
I think this will backfire and kill any service that mass-implements it. The human-to-human nature of video is important to the engagement. Even if it gets to where you can't tell on individual videos, eventually these services will become known as AI slop farms as a whole (something I'm already seeing get a lot of backlash).

And what for? Tiktok creators already generate content for them

api 4 days ago [-]
Like other things, the market will bifurcate. Discerning people who are self-aware, resist addiction, and are put off by this stuff are already jumping ship from these platforms. They'll be hypnosis machines for everyone else.

There's a contingent of people around here who will shrug or even cheer. That's because we are losing any belief in compassion or optimism about uplifting humanity. When I really think about it, it's terrifying.

We are headed for a world where normal people just step over the dying, and where mass exploitation of the “weak” gets a shrug or even approval. Already there in some areas and sectors.

Is this the world you want, folks? Unfortunately I’ve met a disturbing number of people who would say yes.

Now consider that this includes children, whose childhood is being stolen by chum feeds. This includes your own relatives. This includes your friend who maybe has a bit of an addictive personality who gets sucked into gambling apps and has their life ruined. Or it’s you, maybe, though I get the sense there’s a lot of “high IQ” people who think they can’t be conned.

phatfish 3 days ago [-]
Yup, online culture is utter trash, and kids are sucked in the most. As seen on HN as well, with the overwhelming rejection of applying age restrictions to porn sites.

It's like society doesn't exist any more: adults not having to jump through a few hoops so they can wank off is treated as worth the price kids pay in having hardcore porn easily accessible, or a mis-click away.

It's defended with some nonsense about their privacy or "rights". Or a straw man where suddenly all information becomes "censored".

LLM/generative "content" is going to make online culture even worse (if that even seems possible).

ohhellnawman 4 days ago [-]
[dead]
rhetocj23 5 days ago [-]
[dead]
dotancohen 4 days ago [-]
Have you tried viewing the short in the normal YouTube UI? Just copy the short's video ID from the URL and substitute it for the ID in any normal-UI YouTube video URL.
amarshall 4 days ago [-]
It's easier than that: replace "shorts" with "watch" in the URL.
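For anyone scripting it, the rewrite is just a path-segment swap. A minimal sketch in Python (the video ID below is a placeholder, not a real short):

```python
# Turn a YouTube Shorts URL into the equivalent watch-page URL
# by extracting the video ID after "/shorts/" and rebuilding the link.
def shorts_to_watch(url: str) -> str:
    marker = "/shorts/"
    if marker not in url:
        return url  # already a normal watch URL
    video_id = url.split(marker, 1)[1].split("?")[0].split("/")[0]
    return f"https://www.youtube.com/watch?v={video_id}"

print(shorts_to_watch("https://www.youtube.com/shorts/abc123XYZ"))
# https://www.youtube.com/watch?v=abc123XYZ
```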
absoluteunit1 5 days ago [-]
Wow - I had not considered this as the intention.
UltraSane 5 days ago [-]
I think they are doing it so they can mask the extreme compression they are doing to YouTube shorts.
nsoqm 4 days ago [-]
[flagged]
5 days ago [-]
chao- 5 days ago [-]
I learned to ignore the AI summaries after the first time I saw one that described the exact OPPOSITE conclusion/stance of the video it purported to summarize.
leobg 4 days ago [-]
It would be nice of them to defuse the clickbait.

As it is, when a video has a catchy clickbait title, I screenshot the thumbnail and have ChatGPT give me the solution. Or I’ll copy the URL into a transcript fetcher and feed that into Gemini so I can ask specific questions.

He who clickbaits is demoted to the role of “Suggest a topic for me to ask ChatGPT about”.

ssl-3 4 days ago [-]
You know... When I see something that crosses my threshold of clickbait on my YouTube feed, I just select "Not interested" or "Don't recommend channel".

The contrast between my usual logged-in YouTube feed and the rampant sea of unfettered clickbait I see when I've been logged out on any particular device makes the effect obvious.

He who clickbaits is therefore demoted to the role of seldom or never being seen [by me] at all.

This makes sense: If the algorithm exists to increase viewership and engagement, then the algorithm therefore serves to show me stuff that I'll watch. It does not serve to present to me things that I will not watch.

And it works. I can't even find any clickbaity material right now to reference.

It's pretty great.

matejdro 4 days ago [-]
You can get rid of the clickbait using DeArrow extension: https://dearrow.ajay.app/
Trasmatta 5 days ago [-]
Also I just absolutely hate the tone of them. So obviously AI, and they all have the same structure, ending in "Prepare for a journey through blah blah blah".
superkuh 5 days ago [-]
The citation chain for these mastodon reposts resolves to the Gamers Nexus piece on youtube https://www.youtube.com/watch?v=MrwJgDHJJoE
TheTaytay 5 days ago [-]
Yes! Thank you! He is talking about AI generated summaries being inaccurate, which is plenty to get up in arms about.

A lot of folks here hate AI and YouTube and Google and stuff, but it would be more productive to hate them for what they are actually doing.

But most people here are just taking this headline at face value and getting the pitchforks out. If you try to watch the makeup guy's proof: it's talking about Instagram (not YouTube), doesn't have clean comparisons, and shows a video someone sent back to him, which probably means it's a compression artifact, not a face filter the corporate overlords are hiding from the creator. It is not exactly a smoking gun, especially for a technical crowd.

jeffbee 5 days ago [-]
I, for one, find it extremely odd that any of these video posters believe they get to control whether or not I use, directly or indirectly, an AI to summarize the video for me.
superkuh 5 days ago [-]
They're under the encouraged belief that they are in control of what is shown on their YouTube channel. They think they should control what text is shown under their videos on "their" channel. This illusion of control over presentation has been unconvincing for quite a while, but now Alphabet is just throwing its weight around because there are no other options for what YouTube does: allowing money to flow to people who make videos without the video file host getting sued out of existence. Alphabet does this by maintaining a large standing army of lawyers and a huge money supply. The trivial technical issues of file hosting and network bandwidth have been repeatedly solved by others, but when those efforts become popular they're legally attacked and killed.
cyost 5 days ago [-]
data-ottawa 5 days ago [-]
Are these AI filters, or just applying high compression/recompressing with new algorithms (which look like smoothing out details)?

edit: here's the effect I'm talking about with lossy compression and adaptive quantization: https://cloudinary.com/blog/what_to_focus_on_in_image_compre...

The result is smoothing of skin; applied heavily to video (as YouTube does; just look at any old video that was HD years ago), it would look this way.
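As a toy illustration of that smoothing side effect (a sketch, not YouTube's actual pipeline): coarse quantization, the knob an adaptive quantizer turns up to save bits, collapses fine texture into a handful of flat levels.

```python
import math
import random

random.seed(0)

# A slowly varying "image row" plus fine texture (noise).
detail = [math.sin(4 * math.pi * i / 63) + 0.05 * random.gauss(0, 1)
          for i in range(64)]

# Coarse quantization, as an aggressive encoder might apply
# under a tight bitrate budget.
step = 0.5
quantized = [round(x / step) * step for x in detail]

# The 64 distinct input values collapse to a few flat levels, and
# many neighboring samples become identical: the "smoothed" look.
levels = sorted(set(quantized))
flat_neighbors = sum(1 for a, b in zip(quantized, quantized[1:]) if a == b)
print(len(levels), flat_neighbors)
```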

randycupertino 5 days ago [-]
It's filters, I posted an example of it below. Here is a link: https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
data-ottawa 5 days ago [-]
It's very hard to tell in that instagram video, it would be a lot clearer if someone overlaid the original unaltered video and the one viewers on YouTube are seeing.

That would presumably be an easy smoking gun for some content creator to produce.

There are heavy alterations in that link, but having not seen the original, and in this format it's not clear to me how they compare.

randycupertino 5 days ago [-]
You can literally see the filters turn on and off, making his eyes and lips bigger as he moves his face. It's clearly a face filter.
diputsmonro 5 days ago [-]
To be extra clear for others, keep watching until about the middle of the video where he shows clips from the YouTube videos
scrollop 4 days ago [-]
I would but his right "eyebrow" is too distracting
randycupertino 4 days ago [-]
It's a scar in his eyebrow from a bicycle accident as a child: https://www.facebook.com/watch/?v=2183994895455038
DoctorOW 4 days ago [-]
You're misunderstanding the criticism the video levies. It's not that he tried to apply a filter and didn't like the result, it was applied without his permission. The reason you can't simply upload the unaltered original video, is that's what he was trying to do in the first place.
jeffbee 5 days ago [-]
What would "unaltered video" even mean?
bmicraft 4 days ago [-]
The video before it was uploaded.
ares623 5 days ago [-]
The time of giving these corps the benefit of the doubt is over.
bongodongobob 4 days ago [-]
Wouldn't this just be unnecessary compute using AI? Compression or just normal filtering seems far more likely. It just seems like increasing the power bill for no reason.
michaelt 4 days ago [-]
Video filters aren't a radical new thing. You can apply things like 'slim waist' filters in real time with nothing more than a smartphone's processor.

People in the media business have long found their media sells better if they use photoshop-or-whatever to give their subjects bigger chests, defined waists, clearer skin, fewer wrinkles, less shiny skin, more hair volume.

Traditional manual photoshop tries to be subtle about such changes - but perhaps going from edits 0.5% of people can spot to bigger edits 2% of people can spot pays off in increased sales/engagement/ad revenue from those that don't spot the edits.

And we all know every tech company is telling every department to shoehorn AI into their products anywhere they can.

If I'm a Youtube product manager and adding a mandatory makeup filter doesn't need much compute; increases engagement overall; and gets me a $50k bonus for hitting my use-more-AI goal for the year - a little thing like authenticity might not stop me.

ares623 4 days ago [-]
One thing we know for sure is that since ChatGPT humiliated Google, all teams seem to have been given carte blanche to do whatever it takes to make Google the leader again, and who knows what kind of people thrive in that environment. Just today we saw what OpenAI is willing to do to eke out any advantage it can.
echelon 5 days ago [-]
The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.

These people are having a moral crusade against an unannounced Google data compression test thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why or to what end?)

This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.

skygazer 5 days ago [-]
"My, what big eyes you have, Grandmother." "All the better to compress you with, my dear."
mlvljr 4 days ago [-]
[dead]
randycupertino 5 days ago [-]
Why would data compression make his eyes bigger?
echelon 5 days ago [-]
Because it's a neural technique, not one based on pixels or frames.

https://blog.metaphysic.ai/what-is-neural-compression/

Instead of artifacts in pixels, you'll see artifacts in larger features.

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.

mh- 4 days ago [-]
Like a visual version of psychoacoustic compression. Neat. Thanks for sharing.
anticensor 3 days ago [-]
Then they should improve psychovisual grounding of their compressors by a lot.
mh- 3 days ago [-]
I'm commenting on the paper, not the sensationalist thread it was posted in.
mrandish 4 days ago [-]
Agreed. It looks like over-aggressive adaptive noise filtering, a smoothing filter and some flavor of unsharp masking. You're correct that this is targeted at making video content compress better which can cut streaming bandwidth costs for YT. Noise reduction targets high-frequency details, which can look similar to skin smoothing filters.
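For reference, the unsharp-mask step mentioned above is just "signal plus amount times (signal minus blurred signal)". A toy 1-D sketch (hypothetical parameters) showing the characteristic overshoot it adds at edges, which is what gives over-sharpened video its halo look:

```python
# Toy 1-D unsharp mask: sharpen a signal by adding back the
# difference between it and a blurred copy of itself.
def box_blur(signal, radius=1):
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0.0] * 5 + [1.0] * 5   # a hard edge in the signal
sharpened = unsharp_mask(edge)
# Undershoot just before the edge, overshoot just after it:
print(round(sharpened[4], 3), round(sharpened[5], 3))
# -0.333 1.333
```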

The people fixated on "...but it made eyes bigger" are missing the point. YouTube has zero motivation to automatically apply "photo flattery filters" to all videos. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus applying ANY kind of filter to a million videos an hour costs serious money.

I'm not saying YouTube is an angel. They absolutely deploy dark patterns and user manipulation at massive scale - but they always do it to make money. Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs. Improving compression would do all three. Less bandwidth reduces costs, smaller files means faster start times as viewers jump quickly from short to short and that increases revenue because more different shorts per viewer/minute = more ad avails to sell.

Anon1096 4 days ago [-]
I agree, I don't really think there's anything here besides compression algos being tested. At the very least, I'd need to see far more evidence of filters being applied than what's been shared in the thread. But having worked at a social media company in the past, I must correct you on one thing:

>Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue or cut costs.

You can't know this. Almost everything at YouTube is probably A/B tested heavily, and many times you get very surprising results. Applying a filter could very well increase views and time spent in the app enough to justify the cost.

lysace 4 days ago [-]
Activism fatigue is a thing today.
brailsafe 5 days ago [-]
Whatever the purpose, it's clearly surreptitious.

> This level of AI paranoia is getting annoying.

Let's be straight here: AI paranoia is near the top of the most propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!? Should you be afraid!? YES!!!" And in places like Canada, where the economy is predictably crashing because of decades of failures, it's both the cause of and the answer to macroeconomic decline. Ironically (or suspiciously), it's all the same rehashed, redundant takes from everyone: Hank Green, CNBC, every podcast ever, late-night shows, radio, everything.

So to me the target of one's annoyance should be the propaganda machine, not the targets of the machine. What are people supposed to feel, totally chill because they have tons of control?

Aurornis 5 days ago [-]
It's compression artifacts. They might be heavily compressing video and trying to recover detail on the client side.
windex 5 days ago [-]
There are entire fake persona videos these days. Leading scientists, economists, politicians, tech guys, are being impersonated wholesale on youtube.
beezlebroxxxxxx 4 days ago [-]
Yeah, anyone in Canada has seen AI image ads and AI video ads on youtube purporting to feature or include prominent Canadian politicians (current PM and party leaders). Youtube seems to have just wholesale given up on moderating their ad content.
acomjean 5 days ago [-]
I saw this today where "influencers" were taking real doctors from videos and using AI to have them pitch products.

https://www.theguardian.com/society/2025/dec/05/ai-deepfakes...

tartoran 4 days ago [-]
Lawsuit? Seems like a slam dunk to me.
EasyMark 4 days ago [-]
It's hard to sue an unfairly banished Nigerian prince living in South Africa just playing the game, about the best you can do is have their account suspended/deleted. They'll be back a couple hours after that though.
bborud 4 days ago [-]
I don't understand why YouTube would do this: both applying these kinds of "enhancements" to video and doing so without consent, or even informing people. How is this a smart move?

We need more people experimenting with creating a better platform for content creators. Not least so that people like Beato, and creators who aren't as well known, don't constantly get harassed by fraudulent and incorrect copyright infringement claims.

conartist6 4 days ago [-]
It only takes one ambitious person
onion2k 4 days ago [-]
...to drive away the content producers that are key to the platform's success?
krapp 4 days ago [-]
The content producers actually bringing in money are all in on AI. Most people on the platform are a net negative for Google.
conartist6 4 days ago [-]
Exactly. I thought it was fun though that the same statement could just as well apply to creating the platform those producers go to.
cindyllm 4 days ago [-]
[dead]
krapp 4 days ago [-]
They probably believe these "enhancements" will increase engagement.

OR they need to justify the mountain of money they burned on AI somehow.

Also there are alternatives to Youtube in the Fediverse like PeerTube.

stainablesteel 4 days ago [-]
Yeah, it's foolish. The platform should just remain the platform; any attempt at improving engagement should come from the creators, or else the whole system will slowly collapse from frustration and move to a different website.
mannanj 4 days ago [-]
If you make all content look like AI generated content, it normalizes AI generated content more and pushes their AI slop and AI generation products.
ycombigrator 5 days ago [-]
Youtube is an AI training data set.

There is no way Google thinks it's in their interest to serve up clean data to anyone but themselves.

Animats 5 days ago [-]
I'm seeing Youtube summary pictures which seem to be AI-generated. I was looking at [1], which is someone in China rebuilding old machines, and some of the newer summary pictures are not frames from the video. They show machines which are the sort of thing you might get by asking a Stable Diffusion type generator to generate a picture from the description.

[1] https://www.youtube.com/@linguoermechanic

sysworld 4 days ago [-]
Pretty sure this is a feature the uploader can use to have YouTube generate an (AI) thumbnail. I saw it on my channel once, but can't find it now.

You can see it on many MANY channel thumbnails now. At least with the people I follow. I'm not a fan.

Animats 2 days ago [-]
I'm seeing it on more videos now, and it's awful. It's just making stuff up. Badly.

Picking out the most relevant frame in the video was better.

delichon 5 days ago [-]
YouTube should keep their grubby hands off. And give that capability to us instead. I want the power to do personal AI edits built in. Give me a prompt line under each video. Like "replace English with Gaelic", "replace dad jokes with lorem ipsum", "make the narrator's face 25% more symmetrical", "replace the puppy with a xenomorph", "change the setting to Monument Valley", etc.
someothherguyy 5 days ago [-]
i wonder how many years (decades?) out this is still. it would be wild to be able to run something like that locally in a browser. although, it will probably be punishable by death by then.
absoluteunit1 5 days ago [-]
This made me chuckle lol
AmbroseBierce 5 days ago [-]
Talking about AI, Google, and shady tactics: I wouldn't be surprised if we soon discover they are purposefully adding video glitches (deformed characters and so on) in the first handful of iterations when using Veo video generation, just so people get used to trying 3 or 4 times before they receive a good one.
VTimofeenko 5 days ago [-]
Well the current models that cost per output sure love wasting those tokens on telling me how I am the greatest human being ever that only asks questions which get to the very heart of $SUBJECT.
AmbroseBierce 5 days ago [-]
You are right! Would you like me to pretend I'm able to generate better responses if you just give me more input, but end up just wasting your time and your money? And with some luck, when you inevitably end up frustrated, you will conclude that it was your fault for not giving me good enough input, and not mine for being unable to generate good output; in other words, that you just need to get better at GPTing.
constantcrying 4 days ago [-]
The insanity of YouTube is their absolute dedication to forcefully introduce features nobody wants and to neglect all aspects of the site which are in desperate need of fixing.

The videos on the start page are still misaligned. Which looks almost hilariously amateurish.

5 days ago [-]
j45 5 days ago [-]
This is wild.

I wonder if it will end up being treated as part of a codec instead of edits to the base film, so it can be re-run in reverse to undo the changes?

It feels like there needs to be a way to verify that what you uploaded is what's on the site.

silexia 4 days ago [-]
What alternatives do we have to youtube for creators?
KellyCriterion 3 days ago [-]
The interesting thing is: it looks to me like they do this in some EU countries as well.
jeeeb 5 days ago [-]
I really hate all the AI filters in videos. It makes everyone look like fake humans. I find it hard to believe that anyone would actually prefer this.
halapro 5 days ago [-]
I don't find it hard to believe at all. Have you seen all the "240fps TVs" being sold for the past 15 years? They all apply some weird fake smoothing and people prefer them.
mark_l_watson 4 days ago [-]
Their edits on YouTube shorts are hideous but at least it is 100% obvious that artificial edits were applied.

I have a funny attitude towards Google: I am a big privacy nut, have read the principal books on privacy, etc. That said, I usually run all Google web properties in the DuckDuckGo web browser, tweak my privacy settings, etc., and still use Google properties.

YouTube is probably the highest value Google property for me (despite Gemini use, love using Google Cloud Platform, etc.)

I find that the availability of an infinite number of Qi Gong exercise videos, philosophy, a tiny bit of politics, science, and nature videos is almost infinitely better than HBO, Netflix, etc. I am a paid subscriber to all these services, so I am comparing apples to apples here.

I do hate spending 10 seconds opening a video and realizing that it was created artificially, but I immediately stop watching it so the overhead isn’t too bad.

One new feature I really like: if I am watching a long philosophy or science video, I paste the URL into Gemini and ask for a summary, and for it to use what Gemini knows about me to suggest ways the material jibes with my specific interests. After watching a long video, it is very much worth my time getting a summary and comments that also pull in other references.

Sorry for the noisy reply here, but I am saying to use Google properties mindfully, balancing pros and cons, and just use the parts that are useful and only open up sharing private information when you get something tangible for it.

stevenalowe 5 days ago [-]
Every YT short looks AI-ified and creepy now
nilslindemann 4 days ago [-]
It doesn't matter; YouTube is unwatchable anyway, since everything gets auto-translated to hilarious garbage.
JumpCrisscross 5 days ago [-]
FYI, I used to pay for YouTube Premium and have since stopped doing that. Deleting the app and letting ad blockers filter out this nonsense is a superior experience.

Strongly recommend. We’ll get local AIs that can skip the cruft soon enough anyway.

alex1138 5 days ago [-]
Someone mentioned Insta is doing this too in this comment section

I just completely despair. What the fuck happened to the internet? Absolutely none of these CEOs give a shit. People need to face real punishments

karakot 5 days ago [-]
Slopification. On the plus side, I'm more offline lately.
ChrisArchitect 5 days ago [-]
Being driven mad by conspiracy paranoia about 'face filters' (possible compression artifacts) is a great example of being AI-pilled.

And then the discourse is so riddled with misnomers and baited outrage that it goes nowhere.

The other example in submitted post isn't 'edits to videos' but rather the text descriptions of automated captions. The Gemini/AI engine not being very good at summarizing is a different issue.

nottorp 4 days ago [-]
It's fine, I can't watch YouTube with uBlock Origin on anymore anyway :)
tim333 4 days ago [-]
I use it with Origin Lite. Usually works. SponsorBlock is quite good also.
nottorp 4 days ago [-]
That’s probably not Firefox?
tim333 4 days ago [-]
Chrome
nottorp 4 days ago [-]
What's the point of ad blockers when you're using a browser that sends everything to google anyway?
tim333 4 days ago [-]
The latter doesn't bother me. I use gmail etc. But get annoyed by ads.
saint_yossarian 4 days ago [-]
Works perfectly fine here on Firefox, no Premium.
nottorp 3 days ago [-]
Last time i tried it loaded the main UI but the video never showed up.
muppetman 5 days ago [-]
They're heating the garbage slightly before serving it? Oh no.
SilverElfin 5 days ago [-]
I’ve also noticed YouTube has unbanned many channels that were previously banned for overt supremacist and racist content. They get amplified a lot more now. Between that and AI slop, I feel like Google is speed running the changes X made over the last few years.
greenchair 4 days ago [-]
Agreed, encouraging diversity of ideas and free expression is great!
rc_mob 4 days ago [-]
What is this racist ass comment disguised as a "free speech" comment
justinclift 5 days ago [-]
> Samuel Woolley, a disinformation expert at the University of Pittsburgh, said the company’s wording was misleading. “Machine learning is a subset of artificial intelligence,” he said. “This is AI.”

It's the other way around isn't it? "AI" is a subset of ML.

Ylpertnodi 4 days ago [-]
Trust issues with someone who is "a disinformation expert" - at a university, no less?
tantalor 5 days ago [-]
This story is several months old
choilive 5 days ago [-]
What PM thought this was a good idea? This has to be the result of some braindead we need more AI in the product mandate
koolba 5 days ago [-]
What’s the point of doing this?

I don't understand the justification for the expense or complexity.

coliveira 5 days ago [-]
Every engineer at Google is now measured on how much AI they use in their products. This is the predictable result.
5 days ago [-]
Aurornis 5 days ago [-]
This link is to a Mastodon thread which links to another blog post which links to an actual source on ynetnews.com which quotes another article that has a quote from a YouTube rep. Save yourself the trouble and go straight to that article (although it's not great either): https://www.ynetnews.com/tech-and-digital/article/bj1qbwcklg

The key section:

> Rene Ritchie, YouTube’s creator liaison, acknowledged in a post on X that the company was running “a small experiment on select Shorts, using traditional machine learning to clarify, reduce noise and improve overall video clarity—similar to what modern smartphones do when shooting video.”

So the "AI edits" are just a compression algorithm that is not that great.

kridsdale1 5 days ago [-]
THAT Rene Ritchie? Cool, I wondered what happened to him. I listened to his podcast all the time in the 2000s, when podcasts were synced to an iPod over USB before your commute.
tomhow 5 days ago [-]
We updated the link, thanks!
filleduchaos 5 days ago [-]
"Clarify, reduce noise, and improve overall video clarity" is not "just a compression algorithm", what? Words have meanings.
seanmcdirmid 5 days ago [-]
“a small experiment on select Shorts, using traditional machine learning to clarify, reduce noise and improve overall video clarity—similar to what modern smartphones do when shooting video.”

It looks like quality cleanup, but I can’t imagine many creators aren’t using decent camera tech and editing software for shorts.

filleduchaos 5 days ago [-]
Well yes, that's what I mean, quality cleanup is not what I'd call a compression algorithm.

And as you say, arbitrarily applying quality cleanup is making assumptions of the quality and creative intent of the submitted videos. It would be one thing if creators were uploading raw camera frames to YouTube (which is what smartphone camera apps are receiving as input when shooting video), but applying that to videos that have already been edited/processed and vetted for release is stepping over a line to me. At the very least it should be opt-in (ideally with creators having the ability to preview the output before accepting to publish it).

Borealid 5 days ago [-]
Noise is, because of its random nature, inherently less compressible than a predictable signal.

So counterintuitively, noise reduction improves compression ratios. In fact many video codecs are about determining which portion of the video IS noise that can be discarded, and which bits are visually important...
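That claim is easy to check with a general-purpose compressor standing in for a codec's entropy coder. A minimal sketch (zlib, not an actual video codec):

```python
import math
import random
import zlib

random.seed(0)
n = 4096

# A predictable signal (a slow sine wave) vs. pure random noise,
# both serialized to the same number of bytes.
smooth = bytes(int(127 + 120 * math.sin(i / 50)) for i in range(n))
noise = bytes(random.randrange(256) for _ in range(n))

smooth_len = len(zlib.compress(smooth, 9))
noise_len = len(zlib.compress(noise, 9))
print(smooth_len, noise_len)
# The sine shrinks dramatically; the noise barely compresses at all.
```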

filleduchaos 5 days ago [-]
That doesn't make it just a compression algorithm, to me at least.

Or to put it another way, to me it would be similarly disingenuous to describe e.g. dead code elimination or vector path simplification as "just a compression algorithm" because the resultant output is smaller than it would be without. I think part of what has my hackles raised is that it claims to improve video clarity, not to optimise for size. IMO compression algorithms do not and should not make such claims; if an algorithm has the aim (even if secondary) to affect subjective quality, then it has a transformative aspect that requires both disclosure and consent IMO.

Aurornis 5 days ago [-]
> That doesn't make it just a compression algorithm, to me at least

It's in the loop of the compression and decompression algorithm.

Video compression has used tricks like this for years, for example reducing noise before encoding and then adding it back in after decoding. Visual noise doesn't need to be precise, so removing it before compression and then approximating it on the other end saves a lot of bits.
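A toy sketch of that idea (zlib standing in for the codec; real codecs such as AV1 do something similar with film-grain synthesis, where only grain parameters are transmitted):

```python
import math
import random
import zlib

random.seed(1)
n = 4096

clean = [max(0, min(255, int(127 + 100 * math.sin(i / 40)))) for i in range(n)]
# Source with grain: what a camera actually produces.
grainy = bytes(max(0, min(255, c + random.randint(-8, 8))) for c in clean)
# Pretend a denoiser recovered the clean signal before encoding.
denoised = bytes(clean)

sent_grainy = len(zlib.compress(grainy, 9))
sent_denoised = len(zlib.compress(denoised, 9))
print(sent_grainy, sent_denoised)  # the denoised stream is far smaller

# The receiver decodes and re-synthesizes grain of similar strength;
# statistically similar to the original, but not bit-identical.
decoded = zlib.decompress(zlib.compress(denoised, 9))
regrained = bytes(max(0, min(255, b + random.randint(-8, 8))) for b in decoded)
```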

Borealid 5 days ago [-]
Perhaps it would raise your hackles less if you read the Youtube comment as "improve video clarity at a particular file size", rather than how you presumably read it as "improve video clarity [with no regard for how big the resulting file is]".

I think the first comment is why they would position noise reduction as being both part of their compression and a way to improve video clarity.

Forgeties79 5 days ago [-]
h
somnic 5 days ago [-]
YouTube is not hosting and serving uncompressed video, so the apt comparison is not "compression" vs. "no compression" but "fancy experimental compression" vs. "tried and true compression."
honkostani 5 days ago [-]
[dead]
honkostani 5 days ago [-]
[dead]
throwawayk7h 5 days ago [-]
> "Machine learning is a subset of artificial intelligence,” he said.

No, gen AI is a subset of machine learning.

eschaton 5 days ago [-]
AI is the field. Machine learning is one of many specializations within the field. “Generative AI” is the colloquial term for using various machine learning models to generate text, images, video, code, etc.; that is, it’s a subfield of machine learning.

Other subfields of AI include things like search, speech and language understanding, knowledge representation, and so on. There’s a lot more to AI than machine learning and a lot more to machine learning than LLMs (“gen AI”).

MaxL93 5 days ago [-]
"Making AI edits to videos" strikes me as as bit of an exaggeration; it might lead you to think they're actually editing videos rather than simply... post-processing them[1].

That being said, I don't believe they should be doing anything like this without the creator's explicit consent. I do personally think there's probably a good use case for machine learning / neural network tech applied to the cleanup of low-quality sources (for better transcoding that doesn't accumulate errors and therefore waste bitrate), in the same way that RTX Video Super Resolution can do some impressive deblocking & upscaling magic[2] on Windows. But clearly they completely missed the mark with whatever experiment they were running there.

[1] https://www.ynetnews.com/tech-and-digital/article/bj1qbwcklg

[2] compare https://i.imgur.com/U6vzssS.png & https://i.imgur.com/x63o8WQ.jpeg (upscaled 360p)

ssl-3 5 days ago [-]
Please allow me "post-process" your comment a bit. Let me know if I'm doing this right.

> "Making AI edits to videos" strikes me as something particularly egregious; it leads a viewer to see a reality that never existed, and that the creator never intended.

randycupertino 5 days ago [-]
It's not post-processing, they are applying actual filters, here is an example they make his eyes and lips bigger: https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...
MaxL93 5 days ago [-]
Sure, but that's not YouTube. That's Instagram. He says so at 1:30.

YouTube is not applying any "face filters" or anything of the sort. They did however experiment with AI upscaling the entire image which is giving the classic "bad upscale" smeary look.

Like I said, I think that's still bad and they should have never done it without the clear explicit consent of the creator. But that is, IMO, very different and considerably less bad than changing someone's face specifically.

randycupertino 5 days ago [-]
His followers also added screenshots of YouTube Shorts doing it. He says he reached out to both platforms and will be reporting back with an update from their customer service; he's also doing some compare-and-contrast testing for his audience.

Here's some other creators also talking about it happening in youtube shorts: https://www.reddit.com/r/BeautyGuruChatter/comments/1notyzo/...

another example: https://www.youtube.com/watch?v=tjnQ-s7LW-g

https://www.reddit.com/r/youtube/comments/1mw0tuz/youtube_is...

https://www.bbc.com/future/article/20250822-youtube-is-using...

MaxL93 5 days ago [-]
> Here's some other creators also talking about it happening in youtube shorts (...)

If you open the context of the comment, they are specifically talking about the bad, entire-image upscaling that gives the entire picture the oily smeary look. NOT face filters.

EDIT: same thing with the other links you edited into your comment while I was typing my reply.

Again, I'm not defending YouTube for this. But I also don't think they should be accused of doing something they're not doing. Face filters without consent are a far, far worse offense than bad upscaling.

I would like to urge you to be more cautious, and to actually read what you brandish as proof.

watwut 4 days ago [-]
If the upscaling ends up producing bigger eyes and lips ... then it is a face filter.