Our approach to age prediction (openai.com)
BatteryMountain 7 hours ago [-]
Do not give your biometrics, photos of IDs, or videos of your face to these companies. Nor to third parties. The potential failure modes here are very high risk and not worth it. Better to unsubscribe and let them know why.
bojan 4 hours ago [-]
Often you can't let them know. My recent experience canceling yet another American service due to the latest warmongering showed me that.

I did get offered a discount in the cancellation flow, but nowhere could I give a custom reason for my cancellation. They'll never know.

terminalshort 5 hours ago [-]
What is the risk? Anyone who wants a picture of my face can already get one by Googling my name and going to my LinkedIn profile.
tormeh 5 hours ago [-]
For photos of ID this is obvious: a data leak, followed by impersonation ("identity theft") and unwelcome invoices and/or emptied bank accounts.
jacquesm 4 hours ago [-]
Depending on what they get about you, the risks range from impersonation all the way to deepfakes.
jacquesm 4 hours ago [-]
The real problem is service providers that you are somehow forced to use that will in turn use AI for various data extraction. They are effectively gatewaying your data to the AI companies and not all of them are sufficiently transparent about this. Mobile phone companies, rental agencies and various other service providers in turn are part of the funnel.
TimByte 2 hours ago [-]
Yep, there's a real trust gap when it comes to handing over biometric data
astura 5 hours ago [-]
My face is not private information and probably hundreds of other people's cameras capture pictures/videos of me/my face every day.

I hate age verification as a concept and I wouldn't personally go through it to use chatgpt, but "failure modes here are very high risk" is unnecessarily alarmist.

Jonovono 15 hours ago [-]
For some reason ChatGPT has suddenly started thinking I'm a teen. Every answer starts out "Since you are a teen I will..." and it prompts me to upload an ID to show my age. I'm 35.
jayelem 13 hours ago [-]
OpenAI demanded I prove my age in November 2025. I’m an educated 50 year old and had been paying for the service for over a year. When they insisted I prove my age I went through two layers of support but got nowhere. They insisted I go through their verification process. I refused and cancelled my subscription. This may be a losing battle but I’m not going to upload a photo to these services.
csomar 3 hours ago [-]
I don't get it. They're doing everything they can to create roadblocks to adoption. They don't accept prepaid cards, they restrict certain models behind extended verification processes and the list goes on. They got a lucky head-start and seem to have assumed they've built some impenetrable moat.
samename 14 hours ago [-]
Does OpenAI have an incentive to get age prediction "wrong" so that more people "verify" their ages by uploading an ID or scanning their face, allowing "OpenAI" to collect more demographic data just in time to enable ads?
BatteryMountain 7 hours ago [-]
YES. They all do. Everyone is drooling to get their hands on your biometric data and medical info.
xyzzy123 5 hours ago [-]
I have worked in this space, and my experience was that usually age / identity verification is driven by regulatory or fraud requirements. Usually externally imposed.

Product managers hate this: they want _minimum_ clicks for onboarding and time-to-value. Any benefit or value that could be derived from the data is minuscule compared to the detrimental effect on signups and retention when this stuff is put in place. It's also surprisingly expensive per verification and wastes a lot of development and support bandwidth. Unless you successfully outsource the risk, you end up with additional audit and security requirements due to handling radioactive data. The whole thing is usually an unwanted tarpit.

embedding-shape 4 hours ago [-]
> Product managers hate this

Depends on what product they manage, at least if they're good at their job. A product manager at a social media company knows it's not just about "fewest clicks to X", but about a lot of other things along the way.

Surely the product managers at OpenAI are briefed on the potential upsides of having concrete ID for all users.

xyzzy123 4 hours ago [-]
Making someone produce an identity document or turn on their camera for a selfie absolutely tanks your funnel. It's dire.

The effect is strong enough that a service which doesn't require that will outcompete a service which does. Which leads to nobody doing it in competitive industries unless a regulator forces it for everybody.

Companies that must verify will resort to every possible dark pattern to try to get you over this massive "hump" in their funnel; making you do all the other signup before demanding the docs, promising you free stuff or credit on successful completion of signup, etc. There is a lot of alpha in being able to figure out ways to defer it, reduce the impact or make the process simpler.

There is usually a fair bit of ceremony and regulation around how verification data is used, and audits of what happens to it are always a possibility. Sensible companies keep IDV data segregated from product data.

embedding-shape 3 hours ago [-]
> Making someone produce an identity document or turn on their camera for a selfie absolutely tanks your funnel. It's dire.

Yes, but again, a good product manager wouldn't just eyeball the success percentage of a specific funnel and call it a day.

If your platform makes money by subtly including hints about which products to prefer, and you have the benefit of being the current market leader, then it might make sense for the company to accept the cost of forcing people to upload IDs as part of the signup process.

ralfd 3 hours ago [-]
But the comments here are proof of xyzzy123's claim:

No one wants to upload an ID and instead is moving to a competitor!

To still suspect that this must be an evil genius plan by OpenAI doesn't make sense.

embedding-shape 2 hours ago [-]
> No one wants to upload an ID and instead is moving to a competitor!

Comments on the internet are rarely proof of anything, and that holds here too.

If no one wanted to upload an ID, we'd see ChatGPT closing in a couple of weeks, or they'd remove the ID verification. Personally, I don't see either of those happening, but let's wait and see if you're right or not. Email in the profile if you want to later brag about being right; I'll be happy to be corrected then :)

duskdozer 2 hours ago [-]
The average HN user maybe, but elsewhere I see people uploading their IDs without a second thought, especially those in the "chromebooks and google docs in school" generation, who've been conditioned against personal data privacy their whole lives.
jacquesm 4 hours ago [-]
There is no way that the likes of OpenAI can make a credible case for this. What fraud angle would there be? If they were a bank then I can see the point.
xyzzy123 4 hours ago [-]
Regulatory risk around child safety. DSA article 28 and stuff like that. Age prediction is actually the "soft" version; i.e, try not to bother most users with verification, but do enough to reasonably claim you meet requirements. They also get to control the parameters around how sensitive it is in response to the political / regulatory environment.
jacquesm 4 hours ago [-]
Absolutely. Profile building.
Hoasi 7 hours ago [-]
Maybe they should start scanning users' irises.
renewiltord 13 hours ago [-]
Oh yes. In fact, I read on Reddit that they have a secret project to use Worldcoin.
aargh_aargh 7 hours ago [-]
> simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service

The normalization of identity verification to use internet services is itself a problem. It's described much better than I could by EFF here:

https://www.eff.org/deeplinks/2026/01/so-youve-hit-age-gate-...

owisd 6 hours ago [-]
The EFF are fighting a losing battle:

> we hope we’ll win in getting existing ones overturned and new ones prevented.

All the momentum is in the other direction and not slowing down. There are valid privacy concerns, but, buried in this very article, the EFF admit that it’s possible to do age-gating in a privacy-preserving way:

> it’s possible to only reveal your age information when you use a digital ID. If you’re given that choice, it can be a good privacy-preserving option

If they want to take a realistic approach to age-gating, they should be campaigning to make this approach the only option.

TimByte 2 hours ago [-]
And once a system starts making probabilistic guesses about who you are, the burden flips onto the user to disprove it
sunrunner 14 hours ago [-]
"How do you do, fellow kids? Err...skibidi toilet?" That should work, it's at least three years old now I think?
loloquwowndueo 14 hours ago [-]
Forever young, my dude. Forever young.
Jonovono 12 hours ago [-]
Haha, yeah, I asked my younger brother if he's gotten it and he said he hadn't. I'm like, alright, I must give off youthful vibes ;)
aaronblohowiak 11 hours ago [-]
sigma rizzler, no cap.
Retr0id 19 hours ago [-]
Everyone's saying this is for advertising, but I don't think it is. It's so they can let ChatGPT sext with adults.
rsync 10 hours ago [-]
It’s worse than that.

They want to build “the scream room” from Frank Herbert’s _The Jesus Incident_.

There is immense power to wield over someone when you know what they did in the scream room.

BatteryMountain 7 hours ago [-]
Mmm juicy kompromat
dragonwriter 15 hours ago [-]
It's for both; the loosened guardrails around sex and the advertising roll-out are both explicitly for logged-in adults, and age prediction is how they determine that the logged-in user is an "adult".
KaiserPro 5 hours ago [-]
Given that they have terrible data isolation, I suspect the biometrics will be used to identify people when they eventually release the Alexa/Google Glass-style system they're working on.

That kind of context is super useful for making stored data relevant, and also selling shit to you.

samename 19 hours ago [-]
Do you expect the data collected for age verification will be completely separate from the advertising apparatus? I would expect the incentives would align for this to enhance their advertising options.
Retr0id 18 hours ago [-]
It'll almost certainly get used for both, but I believe "adult content" is the primary motivator. If it were just for ads they wouldn't even bother announcing it as a "feature", they'd just do it.

Also:

"Users can check if safeguards have been added to their account and start this process at any time by going to Settings > Account."

Jensson 18 hours ago [-]
> If it was just for ads they wouldn't even bother announcing it as a "feature", they'd just do it.

It's a feature for advertisers, and investors also wanna know. Did you think you were the customer?

Retr0id 18 hours ago [-]
"you can target ads by demographic" is table stakes
datsci_est_2015 7 hours ago [-]
Does this factoid contradict their motivation in publishing this information?
crimsoneer 5 hours ago [-]
It would be a pretty massive GDPR breach if it wasn't, wouldn't it? Biometrics are "special category" data which you can't play fast and loose with.
duskdozer 2 hours ago [-]
Depends on how much it earns vs how much it costs in fines
perihelions 15 hours ago [-]
Discussed here at the time,

https://news.ycombinator.com/item?id=45604313 ("Chat-GPT becomes Sex-GPT for verified adults (twitter.com/sama)")

fakedang 18 hours ago [-]
Exactly. Advertising will drive away users unless they are slowly trained into it, which will take time, time that OpenAI doesn't have.

Meanwhile, the adult market is huge and a sure shot for revenue, from a user base that is less likely to mind the ads.

sigmar 19 hours ago [-]
>behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.

Surely they're using "history of the user-inputted chat" as a signal and just choosing not to highlight that? Because that would make it so much easier to predict age.

Imustaskforhelp 7 hours ago [-]
I once saw a Reddit post showing that even if you type something into ChatGPT and don't press enter, it still gets logged.

So I'm pretty sure they might be using that information as well. I don't see any reason for them not to.

So if you wrote something to ChatGPT and then removed it or never sent it? Yeah, they might be using that history too.
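For illustration, here is a minimal sketch of how a page *could* capture drafts a user typed but never sent. The function names and payload shape are invented for this sketch, not taken from ChatGPT's actual frontend:

```javascript
// Hedged sketch: capture unsent input by wiring an "input" listener to the
// text box and flushing on blur/visibilitychange. All names are hypothetical.
function createDraftLogger(send) {
  let draft = "";
  return {
    // wire to the input box's "input" event: keep the latest unsent text
    onInput(text) {
      draft = text;
    },
    // wire to blur / visibilitychange / beforeunload: ship whatever is
    // still sitting in the box, even though the user never pressed enter
    onLeave() {
      if (draft) send({ draft, ts: Date.now() });
    },
  };
}
```

The point is that nothing exotic is required; a few lines of ordinary event handling are enough for a site to see text you "deleted" before sending.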

embedding-shape 4 hours ago [-]
Last time I checked, most invasive analytics platforms do this by default as soon as you integrate their libraries. Product managers are very hype-driven, and that's usually the reason stuff like this gets integrated in the first place.

I think it's more common than not for the large platforms to try to log everything that is happening, plus stuff that isn't even happening.

Imustaskforhelp 2 hours ago [-]
I don't really know, but I don't think most people know it.

I've sometimes had passwords from my Bitwarden password manager accidentally pasted into ChatGPT, then removed them and thought I was okay.

It's scary that I'm pretty familiar with tech, and I knew this was possible, but I assumed that for privacy's sake they wouldn't do it. The general public is probably even more oblivious.

Also, a quick question: how long are the logs kept at OpenAI? And are logs still taken even if you are in private mode?

embedding-shape 2 hours ago [-]
> I don't really know but I don't think most people know it.

That's for sure, most people don't know how much they're being tracked, even if we consider only inside the platform. Nowadays, lots of platforms literally log your mouse movements inside the page, so they can see exactly where you first landed, how you moved around on the page, where you navigated, how long you paused for, and much much more. Basically, if it can be logged and re-constructed, it will be.
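As a hedged sketch of the kind of mouse-movement logging described above (the names, batch sizes, and thresholds here are invented, not any specific vendor's SDK):

```javascript
// Hedged sketch: session-replay tools typically sample and batch mouse
// positions rather than shipping every event. All names are hypothetical.
function createMouseSampler(flush, { maxBatch = 50, minDeltaPx = 5 } = {}) {
  let batch = [];
  let last = null;
  return {
    // wire to the page's "mousemove" event
    onMove(x, y, ts) {
      // drop tiny movements to cut data volume
      if (last && Math.abs(x - last.x) + Math.abs(y - last.y) < minDeltaPx) {
        return;
      }
      last = { x, y };
      batch.push({ x, y, ts });
      // ship a full batch to the collection endpoint
      if (batch.length >= maxBatch) {
        flush(batch);
        batch = [];
      }
    },
  };
}
```

From batches like these, a replay tool can reconstruct where you landed, how you moved, and where you paused, which is exactly the capability described above.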

> Also a quick question but how long are the logs kept in OpenAI? And are the logs still taken even if you are in private mode?

As far as I know, OpenAI is currently under a legal obligation to log all ChatGPT chats regardless of their own policies, but I read that a while ago (sometime this summer?), so maybe it's different today.

What exactly do you mean by "private mode"? If you mean an incognito/private window in your browser, it has basically no impact on how much is logged by the platforms themselves; it only affects your local history.

For the "temporary mode" in ChatGPT, I also think it has no impact on how much they log, it's just about not making that particular chat visible in your chat history, and them not using that data for training their model. Besides that, all the tracking in your browser still works the same way, AFAIK.

Imustaskforhelp 44 minutes ago [-]
Wow thanks for your response man.

I was referring to temporary mode (though I also considered a private window to be fairly safe, but wow, it looks like they log literally everything).

So out of all the providers (Gemini, Claude, OpenAI, Grok, and others), do they all log everything permanently?

If they are logging everything, what prevents their logs from getting leaked or "accidentally" being used in training data?

> As far as I know right now, OpenAI is under legal obligation to log all of their ChatGPT chats, regardless of their own policies, but this was a while ago (this summer sometime?), maybe it's different today.

I also remember that post, and given the current political environment, that's kind of crazy.

Also, some of these services require a phone number one way or another, and most likely that number can be linked to the logs. Since phone numbers are issued by the government, if threat actors want data at scale and OpenAI's logs contribute to it, a very good profile of a person can be built from these services... Wild.

So if OpenAI is under a legal obligation, is there a limit on how long they keep the logs, or are they keeping them permanently? I'm going to look for the old HN article right now, but if the answer is permanently, then it's even more dystopian than I imagined.

The mouse tracking is wild too. I might use LibreWolf at this point to prevent some of that tracking.

Also what are your thoughts on the new anonymous providers like confer.to (by signal creator), venice.ai etc.? (maybe some openrouter providers?)

embedding-shape 22 minutes ago [-]
You can safely assume (and probably better that you do regardless) that everyone on the internet is logging and slurping up as much data as they can about their users. The product team is usually the one using the data, but depending on the amount of controls in the company, most of it may sit in a database that engineering, marketing, and product all have access to.

> If they are logging everything, what prevents their logs from getting leaked or "accidentally" being used in training data?

The "tracking data" is different from "chat data", the tracking data is usually collected for the product team to make decisions with, and automatically collected in the frontend and backend based on various methods.

The "chat data" is something that they'd keep more secret and guarded typically, probably random engineers won't be able to just access this data, although seniors in the infrastructure team typically would be able to.

As for how easily that data could slip into training data, I'm not sure, but I'd expect the mere fear of big names suing them would be enough to make them really careful with it. That's my hope, at least.

I don't know any specifics about how long they keep logs, but what I do know is that you typically sit on your data for as long as you can, because you always end up finding new uses for it. Maybe you want to compare how users used the platform in 2022 vs 2033, and then you'd be glad you kept it. So unless the company has an explicit public policy about it, assume they sit on it "forever".

> Also what are your thoughts on the new anonymous providers like confer.to (by signal creator), venice.ai etc.? (maybe some openrouter providers?)

Haven't heard about any of them :/ This summer I took it one step further and got myself the beefiest GPU I could reasonably get (for unrelated purposes) and started using local models for everything I do with LLMs.

jacquesm 4 hours ago [-]
Same goes for the google search box and many others like it. Every keystroke gets sent.
zardo 19 hours ago [-]
Chat history would be a good signal to predict age until you give kids a reason to try to confound it.
sigmar 18 hours ago [-]
I, for one, would love to see the gen alphas tiktoking about what 401k questions to type into chatgpt
jedberg 15 hours ago [-]
Anyone remember the game Leisure Suit Larry? To get the full 18+ experience, you had to answer five trivia questions that only adults should know. But it turns out smart teens who like trivia knew most of them too (and you could just ask mom and dad, they had no clue why you were asking which President appeared on Laugh In).
duskwuff 9 hours ago [-]
Also, hilariously, a lot of those questions require a trip to Wikipedia (or a game guide) today. Many of them reference bits of 1960s/1970s pop culture which are no longer common knowledge.

https://allowe.com/games/larry/tips-manuals/lsl1-age-quiz.ht...

lordgrenville 6 hours ago [-]
This is fun. I asked an AI to come up with some modern ones to check whether someone is over 30: Zune, Friends, early memes, Who Wants to Be a Millionaire, etc. https://chat.deepseek.com/share/v9d5ckb8gv9rahwetq
astura 5 hours ago [-]
>"Dental plan! Lisa needs braces!" is a workplace chant from...

OMG, That's absolutely unhinged to describe something that takes place entirely in Homer's head as a "workplace chant."

thaumasiotes 9 hours ago [-]
A lot of them don't. Some appear to be specialized for children:

> Peter Piper picked pickled (peppers)

> How many molecules are there in a glass of water? (as many as there are)

There's also this one:

> Which is not a city in Mexico? (San Diego)

which appears to have been false at the time, and is still false now.

esafak 9 hours ago [-]
Spiro Agnew is a form of social disease, a jazz-fusion rock band, a former Vice President, the first woman in Congress?
engineer_22 19 hours ago [-]
Even more adults would be flagged as children.
JohnMakin 19 hours ago [-]
Hard no. It's so easy to get "flagged" by opaque systems for "Age verification" processes or account lockouts that require giving far too much PII to a company like this for my liking.

> Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.

Yeah, my LinkedIn account, which was 15 years old and a paid pro account for several years, got flagged for verification (no reason ever given; I rarely used it for anything other than interacting with recruiters), with this same company as the backend provider. They wouldn't accept a (super invasive-feeling) full facial scan plus a REAL ID; they also wanted a passport. So I opted out of the platform. There was no one to contact; it wasn't "fast" or "easy" at all. This kind of behavior feels like a data grab for more nefarious actors and data brokers further downstream of these kinds of services.

samename 19 hours ago [-]
The unfortunate reality is this isn't just corporations acting against users' interests; governments around the world are pushing for these surveillance systems as well. It's all about centralizing power and control.
miki123211 8 hours ago [-]
Don't forget the journalists.

Facebook made journalists a lot less relevant, so anything that hurts Meta (and hence anything that hurts tech in general) is a good story that helps journalists get revenge and revenue.

"Think of the children", as much as it is hated on HN, is a great way to get the population riled up. If there's something that happens which involves a tech company and a child, even if this is an anecdote that should have no bearing on policy, the media goes into a frenzy. As we all know, the media (and their consumers) love anecdotes and hate statistics, and because of how many users most tech products have, there are plenty of anecdotes to go around, no matter how good the company's intentions.

Politicians still read the NY Times, who had reporters admit on record that they were explicitly asked to put tech in an unfavorable light[1], so if the NYT says something is a problem, legislators will try to legislate that problem away, no matter how harebrained, ineffective and ultimately harmful the solution is.

[1] https://x.com/KelseyTuoc/status/1588231892792328192?lang=en

BowBun 15 hours ago [-]
It cracks me up that Persona is the vendor OpenAI will use to do manual verifications (as someone who works on integrations with Persona).

I'm glad ChatGPT will get a taste of VC startup tech quality ;)

pixl97 19 hours ago [-]
Yep. Whenever platforms opt for more data I opt out. And like clockwork they let loose all that PII to hackers within months.
seneca 19 hours ago [-]
Yeah, this is all far far too invasive. The goal is obviously to gather as much data on you as possible under whatever pretense users are most likely to accept. "Think of the children", as always. This will then be used to sell advertising to you, or outright sell it to data brokers.

New boss, same as the old boss.

Barathkanna 6 hours ago [-]
I get why this exists and appreciate the transparency, but it still feels like a slippery middle ground. Age prediction avoids hard ID checks, which is good for privacy, yet it also normalizes behavioral inference about users that can be wrong in subtle ways. I’m supportive of the safety goal, but long term I’m more comfortable with systems that rely on explicit user choice and clear guardrails rather than probabilistic profiling, even if that’s messier to implement
TimByte 2 hours ago [-]
The selfie re-verification escape hatch helps, but it also quietly normalizes "prove you're an adult to get full functionality," which is a pretty big shift from how most software has worked historically
raincole 19 hours ago [-]
Pretty cool. Evidence that you can do whatever you want under the banner of 'protecting the kids.'
bilekas 18 hours ago [-]
Protecting the kids and fighting terror. Anything that can't be argued against is always used as a justification by people in power who don't want to incite a riot.

Politicians, CEOs, Lawyers it's standard practice because it's so effective.

Traubenfuchs 5 hours ago [-]
The holy trinity is complete with 3., "money laundering (prevention/detection)".
lpcvoid 19 hours ago [-]
>young people deserve technology that both expands opportunity and protects their well-being.

Then maybe OpenAI should just close shop, since (SaaS) LLMs do neither in the mid to long term.

qoez 19 hours ago [-]
Do regulators really care about a predicted age? I feel like they require hard proof of age before showing explicit content. The only ones who care about predicted age are advertisers.
Imustaskforhelp 19 hours ago [-]
It's not so much for the regulators as it is for the advertisers.

At this point, just use Gemini (yes, it's Google, with all the issues that brings, if you need SOTA). I have also been trying out chat.z.ai more and more for simple text tasks (like "hey, can you fix this Docker issue") and I feel like it's pretty good, plus it's an open-source model (honestly, chat.z.ai feels pretty SOTA to me).

threetonesun 19 hours ago [-]
Kagi's Assistant is the most useful tool I've found as far as searching goes, and occasionally for simple codegen. It lets you use a wide variety of models and isn't tracking me.
Imustaskforhelp 18 hours ago [-]
If we are talking about complete privacy, I am also trying out https://confer.to (created by the Signal team). I'm unable to run it on my Mac (passkey support; I tried both Zen and Orion), but I tried it on Android Chrome on a tablet and I'm fairly optimistic about it.

I have heard good things about Kagi; in fact, that's why I tried Orion and still have it. But I haven't bought Kagi, I just used the free searches Orion gives, and I don't know if that includes Kagi's Assistant.

I think proton's Lumo is another good bet.

If you want something that doesn't track you: I once asked the Cerebras team on their Discord whether they track the queries and responses from their website's try-now feature, and they said they don't. I don't really see a reason why they would lie about it, given that it's only meant for very basic purposes and they don't train models or anything.

You also get one of the fastest inferences for models including GLM 4.7 (which z.ai uses)

You might not get search results though but for search related queries duck.ai's pretty good and you can always mix and match.

But Cerebras recently got a $10 billion investment from OpenAI, and I've been wary of them since, so keep that in mind.

Kagi Assistant does seem like a good fit for someone who already uses Kagi or has a subscription.

Retr0id 19 hours ago [-]
In the UK, age verification just has to be "robust" (not further specified). Face scanning, for example, is explicitly designated as an allowed heuristic.
heliumtera 19 hours ago [-]
Any proof that you are above a certain age will also expose your identity. That is the only reason regulators care about children's safety online: they care about ID. LLMs are very good at profiling users on Hacker News and finding alt accounts, for example. Profiling is the best use case for LLMs.

So there you go: maybe it won't give exactly what regulators say they want, but it will give exactly what they truly want.

al_borland 19 hours ago [-]
> Viral challenges that could encourage risky or harmful behavior in minors

Why would it encourage this for anyone?

bandrami 10 hours ago [-]
I was depressed but not surprised at how easy it turned out to be to get people to take horse-dewormer and wash it down with unpasteurized milk
gardnr 19 hours ago [-]
Hey Al, they might be implying that non-minors would be impervious to viral challenges thanks to some sort of well-developed critical-thinking faculties. I am not so optimistic.
al_borland 19 hours ago [-]
A lot of people are using AI as a trusted friend, because they don't really have anyone else. A good friend would, I hope, talk someone out of doing something dangerous just to get a silly viral video. With AI being trained on the internet, it's going to have a very different take on things, as the internet only cares about the spectacle, not the person performing it.
renewiltord 13 hours ago [-]
Agreed. We need to take away Internet access from psychologically susceptible people. Those with any mental illness should probably access the Internet only under supervision. They can request a URL and an online proctor can be automatically contacted who will view their screen and make sure that they are not viewing dangerous things.

It is truly not just children who need protection.

pibaker 12 hours ago [-]
Agreed, we must protect those diagnosed with sluggish schizophrenia[1] from the internet by sending them to offline vacation homes in Siberia. Can't risk them becoming disillusioned with our great motherland!

[1] https://en.wikipedia.org/wiki/Sluggish_schizophrenia

duskdozer 2 hours ago [-]
Puts the recent rhetorical pushes for "reopening the asylums" in another light
Aurornis 12 hours ago [-]
> We need to take away Internet access from psychologically susceptible people. Those with any mental illness should probably access the Internet only under supervision.

Great way to ensure nobody seeks mental health treatment.

pixl97 19 hours ago [-]
After watching politics over the past decade it seems people of all ages have no critical thinking skills.
gardnr 19 hours ago [-]
When we look at how fast and coordinated the rollout of age verification has been around the globe, it's hard not to wonder if there was some impetus behind it.

There are dark sides to the rollout that EFF details in their resource hub: https://www.eff.org/issues/age-verification

There is a confluence of surveillance capitalism and a global shift towards authoritarianism that makes it particularly alarming right now.

JohnMakin 19 hours ago [-]
Agreed. Interesting that these systems inevitably involve proving you're a citizen in some way, which seems unnecessary if the goal is just to figure out someone's age.
nomilk 10 hours ago [-]
This could have the unintended consequence of encouraging under-agers to ask more 'adult' questions to try to trick it into thinking they're adults. Analogous to the city that wanted to get rid of rats, so it offered a bounty for every dead rat, and to the surprise of nobody except the policy makers, the city ended up with more rats, not fewer. (Lesson: they thought they were incentivising fewer rats, but unintentionally incentivised more.)

The padding in OpenAI's statement is easy to see through:

> The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.

(The only real signal here is 'usage patterns', AKA the content of conversations; the other variables are obfuscation to soften the idea that OpenAI will be poring over users' private conversations to figure out whether they're over or under age.)
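To illustrate how account-level signals *could* be combined, here is a toy logistic score. The features, weights, and signs are entirely invented for this sketch; OpenAI has not published how its model actually works:

```javascript
// Hedged toy sketch of an "age score" over account-level signals.
// Higher score = more likely adult. Every weight here is made up.
function ageScore({ accountAgeDays, lateNightRatio, statedAge }) {
  const z =
    0.01 * accountAgeDays +        // older accounts skew adult (assumption)
    -2.0 * lateNightRatio +        // heavy late-night use skews teen (assumption)
    0.05 * (statedAge - 18);       // stated age, centered on 18
  return 1 / (1 + Math.exp(-z));   // logistic squash to (0, 1)
}
```

Even a toy like this shows why the weak signals are cheap cover: the decisive inputs in any real system would almost certainly come from conversation content, not account age or time of day.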

Worth also noting 'neutered' AI models tend to be less useful. Example: Older Stable Diffusion models were preferred over newer, neutered models: https://www.youtube.com/watch?v=oFtjKbXKqbg&t=1h16m

Imustaskforhelp 7 hours ago [-]
I am an actual minor, and I think I used to watch enough mature content (i.e. some finance, geopolitics & tech) on YouTube that they may have thought I was over 18.

But when YouTube rolled out its age checks, I saw a video on taxes, made simply to trick the YouTube algorithm, which had A LOT of views.

I went to the comments, and most of them were teenagers bashing the YT idea and joking about how yes, it had helped them with their taxes etc.

I simply don't see how OpenAI would be any different.

YouTube is still one in a million though; nothing else like it exists. But there are many chat providers like OpenAI which are actually pretty good nowadays & don't want your ID.

nubg 19 hours ago [-]
They're trying to make ChatGPT more attractive to advertisers.
big_toast 19 hours ago [-]
I hope Anthropic or someone pulls an Apple and has the taste to say no to ads. Maybe it will just be Apple. (Even their resistance is slowly fading..)

We don't need to jam ads into every orifice.

I hope there's more value to be had not doing ads than there is to include them. I'd cancel my codex/chatgpt/claude if someone planted the flag.

OpenAI seems to think it has infinite trust to burn.

raw_anon_1111 14 hours ago [-]
If you don’t want ads, pay for ChatGPT. If you aren’t making them money via ads or subscriptions, why should they care about you?
adrianwaj 13 hours ago [-]
I think they really should charge with micropayments, and they could even roll out their own currency for that if need be. Ads suck.

Actually, all the AI companies together should choose a micropayment system to focus on. I know in fashion, I've seen what would seem like competing brands center around a common "pillar of influence."

Also, if (long-tail?) AI companies work together, they could install appliances and terminals around cities. The most immediate use case - transport timetables. It seems like a no-brainer the more I think about it. Especially good for tourists who don't speak the local language. Governments may end up wanting to do that anyway and could subsidize the cost. It really depends on how fixated people are to owning their own screen, versus using someone else's. Those city screens could end up billboards anyway - especially for local businesses. They could print for a fee too and third parties could pay to get their app listed. Also, it's worth considering the increase in wealth inequality and rising hardware costs for people to own and stream into their own device. So this could be like the Internet Cafe 2.0.

Incidentally, there's a recent thread about someone streaming HN to a cheap display: https://news.ycombinator.com/item?id=46699782 - why not have such displays around town? I guess one major problem is vandalism.

raw_anon_1111 12 hours ago [-]
People have been suggesting micropayment systems for the web for over a quarter century

https://www.w3.org/TR/1999/WD-Micropayment-Markup-19990825/

Why would you want to use a terminal for mass transit instead of your phone?

adrianwaj 11 hours ago [-]
I prefer dumb phones, and then prefer to not have to carry one 24/7. Device lock-in is a whole other discussion. Why can't phones be switched off anyhow in terms of telco signal? Yet their Wifi and Bluetooth can be. Weird. What are they doing in stealth?

Look how cheap x402 transactions are (ie almost free) https://gemini.google.com/share/cbf1adb1570c It's a new thing - have business models adapted accordingly?
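x402 builds on the long-reserved HTTP 402 status: request a resource, get back 402 with a price, pay, retry. A generic toy sketch of that loop (everything here - the header names, the wallet, the server - is invented for illustration, not the real x402 protocol):

```python
def fetch_with_payment(get, pay):
    """Call `get()`; if the server answers 402, pay once and retry."""
    status, headers, body = get(payment_proof=None)
    if status == 402:
        # The 402 response is assumed to carry the price and pay-to address.
        proof = pay(amount=headers["price"], to=headers["pay_to"])
        status, headers, body = get(payment_proof=proof)
    return status, body

# Toy server: demands a payment proof before serving content.
def toy_server(payment_proof=None):
    if payment_proof is None:
        return 402, {"price": "0.001", "pay_to": "addr123"}, None
    return 200, {}, "premium content"

def toy_wallet(amount, to):
    return f"paid:{amount}:{to}"   # stand-in for a stablecoin transfer

status, body = fetch_with_payment(toy_server, toy_wallet)
assert (status, body) == (200, "premium content")
```

Whether the business models adapt is a separate question from whether the plumbing is cheap; the loop above is trivial, pricing content per-request is not.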

duskdozer 1 hours ago [-]
"source" of an LLM chat

https://www.x402.org/ >AI agent sends HTTP request and receives 402: Payment Required

>AI agent pays instantly with stablecoins

Smells like a weird ad to me.

raw_anon_1111 11 hours ago [-]
You don’t think your preference for not using a smartphone makes a viable market, do you - seeing as global penetration of smartphones is 90%, and even higher among those who can afford to travel?
adrianwaj 11 hours ago [-]
I think compulsory 2FA and the trend towards must-have downloadable phone apps is a problem, but that may not be fully evident... yet. In my experience, being tied to a phone and phone number is a problem. Also, when you carry your phone, your funds are at risk too from tech-jacking (as opposed to car-jacking) .. especially with crypto, right?

"I keep a cheap travel eSim plan active on it so that if I am somewhere sketchy I can leave my main phone at home." https://news.ycombinator.com/item?id=46639157

It's a personal choice - you are also tied to a battery charger. Wait, solar panels are getting better.

Why is it so difficult to run a mobile app on a PC? Why can't there be a device that I connect to my laptop to turn it into a phone (voice + texts) whenever the need arises? Weird. What's with the identification required at SIM point-of-sale? Is someone trying to track me or something?

raw_anon_1111 10 hours ago [-]
So now instead of carrying a phone around with you to make a call, you want to carry around a laptop?

Again, your use case is not a business model. What next, bring back pay phones?

10 hours ago [-]
big_toast 12 hours ago [-]
Oh. I do pay since basically the day they offered it. It's not a matter that they should care.

No ads is a point of product differentiation. One among many. But in some sense ads are a natural resource curse that pervade the whole company. Again, I point to Apple vs Google/Meta.

WarmWash 13 hours ago [-]
People who haven't seen an ad or paid a subscription in 20 years are still trying to figure out why no one listens to their opinion on how to make the internet better.

I wouldn't hold my breath...

big_toast 11 hours ago [-]
Not sure if you're referring to me, or making a more general comment.

I'd listen to someone who has managed to not see an ad or pay for a subscription in 20 years. Sounds pretty impressive.

Me, I've seen a lot of ads and happily paid for chatgpt plus, pro, and the api. Not that I think that privileges my opinion.

fakedang 18 hours ago [-]
You give too much credit to Apple in the AI age. Especially since they've already partnered with Google to power Siri with Gemini now.

Apple is a has-been. Anthropic is best positioned to take up the privacy mantle, and may even be forced to, given that most of their revenue is enterprise and B2B.

17 hours ago [-]
accrual 15 hours ago [-]
I agree with you and it's really sad that Tim couldn't take Apple in the direction of user safety over profit.

I hope there's some layer between Apple and Gemini but only those at the helm can be trusted to make that happen and I don't trust them to choose users over the dollar.

panopticon 14 hours ago [-]
> I hope there's some layer between Apple and Gemini

In the press release Apple said they will be running this on their own hardware (both on-device and private cloud). They're not going to be directly routing requests to Gemini hosted by Google.

This obviously doesn't preclude some kind of data sharing arrangement, but there is at least some indirection between the two.

9991 12 hours ago [-]
Apple makes tens of billions of dollars per year on ads. Your conception of Apple and who they are have diverged.
big_toast 11 hours ago [-]
Apple's ad revenue is ~5% of company revenue? It seems like the split could be a lot higher. They might have to give up something for that though.
samename 19 hours ago [-]
Exactly. The more data they can collect, the better.

Is it not in OpenAI's best interest for them to accidentally flag adults as teens so they have to "verify" their age by handing over their biometric data (Persona face scan) or their government ID? Certainly that level of granularity will enhance the product they offer to their advertisers.

thuruv 19 hours ago [-]
> We’re learning from the initial rollout and continuing to improve the accuracy of age prediction over time

> While this is an important milestone, our work to support teen safety is ongoing.

agreed, I guess we'll be seeing some pushback similar to Apple's CSAM scanning, but overall it's about getting better demographics on their consumers for better advertising, especially when you have one-click actions combined with it. We'll be seeing a handful of middleware plugins (like Honey) popping up, which I think is the intended use case for something like chat-based apps

gridspy 17 hours ago [-]
Random reply: 20 days ago you asked for my ChatGPT custom instructions to be more skeptical. It is:

Use an encouraging tone. Adopt a skeptical, questioning approach. Call me on things which don't seem right. List possible assumptions I'm making if any.

idontwantthis 12 hours ago [-]
Whenever anything involving advertisers and AI comes up I wonder, are we supposed to believe both that they are on the cusp of creating God and that advertising revenue is a meaningful second goal?
Shellban 17 hours ago [-]
Considering that OpenAI is having trouble getting its models to avoid recommending suicide (something it probably does not want for ANY user), I rather doubt this age prediction is going to be that helpful for curbing the tool's behavior.
xgkickt 19 hours ago [-]
How long before a phrase is found that causes a predicted birthdate of 1970/01/01 ?
aleksandrm 19 hours ago [-]
It's nonsense and doesn't work. They have "age predicted" my account a couple of months back saying I'm under 18, while I'm a man in my 40s who uses ChatGPT for mostly work related stuff, and nothing that would indicate that it's someone under 18. So now they are asking for a government ID to prove it. Yeah, no thanks.
deltoidmaximus 2 hours ago [-]
The purpose isn't to actually predict age with any accuracy, it's really just a cover to get you to cough up PII. They'll slowly turn the need-verification knob so that everyone eventually has to do it.
samename 19 hours ago [-]
That's because it was never really about protecting the children.
pixl97 19 hours ago [-]
No more typing "how is babby formed" I guess.
aleksandrm 18 hours ago [-]
"Am I pregante"
throwaway132448 19 hours ago [-]
Creepy people doing creepy things.
twelvechairs 19 hours ago [-]
"it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."
elzbardico 19 hours ago [-]
Looks like an elegant solution. And yes, demographics are useful for advertising.
zatkin 19 hours ago [-]
This title gave me a weird feeling as if they were going to predict my own age.
VTimofeenko 19 hours ago [-]
Maybe at some point they will graduate to being able to predict who I am from "Friends"
heliumtera 19 hours ago [-]
exactly, I'm genuinely impressed by how few people here have figured this out
rogerrogerr 19 hours ago [-]
That’s what they’re doing
mayhemducks 15 hours ago [-]
See it starts with gender, and if (user.gender === "Female") user.age = 29.

After that, the algorithm gets very complex and becomes a black box. I'm sure they spent billions training it.

sho_hn 19 hours ago [-]
I think this is good.

I've been very aggressive toward OpenAI on here about parental controls and youth protection, and I have to say the recent work is definitely more than I expected out of them.

samename 19 hours ago [-]
Interesting. Do you believe OpenAI has earned user trust and will be good stewards of the enhanced data (biometric, demographic, etc) they are collecting?

To me, this feels nefarious with the recent push into advertising. Not only are people dating these chat bots, but they are more trusting of these AI systems than people in their own life. Now, OpenAI is using this "relationship" to influence user's buying behavior.

sonofhans 19 hours ago [-]
This is a thoughtful response and deserves discussion. Yes, certainly, OpenAI might get your age wrong. Yes, certainly, they’re signaling to advertisers.

But consider OPs point — ChatGPT has become a safety-critical system. It is a tool capable of pushing a human towards terrible actions, and there are documented cases of it doing this.

In that context, what is the responsibility of OpenAI to keep their product away from the most vulnerable, and the most easily influenced? More than zero, I believe.

richwater 19 hours ago [-]
> It is a tool capable of pushing a human towards terrible actions

So is Catcher In The Rye and Birth of a Nation.

> the most vulnerable, and the most easily influenced

How exactly is age an indicator of vulnerability or subject-to-influence?

sonofhans 9 hours ago [-]
> So is Catcher In The Rye and Birth of a Nation.

No, those are books. Tools are different, particularly tools that talk back to you. Your analogy makes no sense.

> How exactly is age …

In my experience, 12-year-old humans are much easier to sway with pleasant-sounding bullshit than 24-year-old humans. Is your experience different?

seneca 19 hours ago [-]
> ChatGPT has become a safety-critical system.

It's really really not. "Safety-critical system" has a meaning, and a chat bot doesn't qualify. Treating the whole world as if it needs to be wrapped in bubble-wrap is extremely unhealthy and it generally just used as an excuse for creeping authoritarianism.

DarkNova6 19 hours ago [-]
"I'm sorry Dave, I'm afraid I can't do that"
sho_hn 19 hours ago [-]
I'm an engineer working on safety-critical systems and have to live with that responsibility every day.

When I read the chat logs of the first teenager who committed suicide with the help and encouragement of ChatGPT I was immediately thinking about ways that could be avoided that make sense in the product. I want companies like OpenAI to have the same reaction and try things. I'm just glad they are.

I'm also fully aware this is unpopular on HN and will get downvoted by people who disagree. Too many software devs without direct experience in safety-critical work (what would you do if you truly are responsible?), too few parents, too many who are just selfishly worried their AI might get "censored".

There are really good arguments against this stuff (e.g. the surveillance effects of identity checks, the efficacy of age verification, etc.) and plenty of nuance to implementations, but the whole censorship angle is lame.

footy 19 hours ago [-]
I am against companies doing age verification like this due to the surveillance effects, but I agree with you that the censorship angle is not a good one.

I suppose mainly because I don't think a non-minor committing suicide with ChatGPT's help and encouragement matters less than a minor doing so. I honestly think the problem is the user interface for GPT being a chat. I think it has a psychological effect that you can talk to ChatGPT the same way you can talk to Emily from school. I don't think this is a solvable problem if OpenAI wants this to be their main product (and obviously they do).

Traubenfuchs 4 hours ago [-]
We shouldn't build our products and policies around one-off darwin-award level people like that teenager. It reduces the product's quality and increases the burden on every user.

I wholeheartedly reject the fully sanitized "good vibes only" nanny world some people desire.

throwaway132448 19 hours ago [-]
Maybe we don’t all need saving from ourselves. Maybe we need to grow up and have some personal responsibility. As someone who is happy to do that, seeing personal freedom endlessly slashed in the name of safety is tiresome.

My feelings have absolutely nothing to do with censorship. That’s just an easy straw man for you to try and dismiss my point of view, because you’re scared of not feeling safe.

pixl97 19 hours ago [-]
Cool, I'd like you to make a commercial system you sell access to and ensure that it is unsafe. I'll represent the injured and we'll own all your corporate assets, and likely will pierce the corporate veil due to your wanton behavior.
throwaway132448 19 hours ago [-]
What are you on about? Laws are a codification of social norms. I’m not suggesting anything outside of existing social norms. Quite the opposite, I’m suggesting we stop changing them.
pixl97 17 hours ago [-]
That we stop changing social norms? How do you propose that, by making a law?
throwaway132448 16 hours ago [-]
That’s literally what laws do. You continue to contribute nothing to the discussion.
pixl97 14 hours ago [-]
Conversely I'm arguing your "everyone should just" argument is meaningless for addressing social change and behavior, hence your discussion is adding nothing. Of course neither of us see our communication as meaningless so it's up to the other readers to decide the merits of our text.
3 hours ago [-]
sho_hn 19 hours ago [-]
I'm not under 18. I assume you aren't either.
throwaway132448 19 hours ago [-]
I was addressing your own (supposedly) safety-critical work, how you’ve used that to justify other work in the name of safety more broadly, and how you’ve placed yourself on a pedestal with your experience, to convince yourself that comments by others on the necessity of such safety are less qualified.
rainonmoon 11 hours ago [-]
A society which took psychological safety seriously would never have created ChatGPT in the first place. But of course seriously advocating for safety would cost one their toys, and for one unwilling to pay that cost, empowering the surveillance apparatus seems very reasonable and easily confused for safe. When one’s children or friends’ children can no longer enter an airport because some vibe-coded slop leaked their biometrics, we’ll see if that holds true.
Der_Einzige 15 hours ago [-]
Sorry, but for every chat log of one teenager who committed suicide due to AI, I'm sure you can find many more people/teens with suicidal thoughts or intent who are explicitly NOT acting on them because of advice from AI systems.

I'm pretty sure AI has saved more lives than it has taken, and there's a pretty strong argument that someone who's thinking of committing suicide will likely be thinking about it with or without AI systems.

Yes, sometimes you really do "have to take one for the team" in regards to tragedy. Indeed, Charlie Kirk was literally talking about this the EXACT moment he took one for the team. It is a very good thing that this website is primarily not parents, as they cannot reason with a clear, unbiased opinion. This is why we have dispassionate lawyers to try to find justice, and why we should have non-parents primarily making policy involving systems like this.

Also, parents today are already going WAY too far with non-consensual actions taken towards children. If you circumcised your male child, you have already done something very evil that might make them consider suicide later. Such actions are so normalized in the USA that not doing it will make you be seen as weird.

kmoser 10 hours ago [-]
The relatively arbitrary cutoff at 18 is also an indication that this is a blunt tool, intended to address some low-hanging fruit of potential misuse but which will clearly miss the larger mark, since there will be plenty of false positives (not to mention false negatives).

Some kids are mature enough from day one to never need tech overlords to babysit them, while others will need to be hand-held through adulthood. (I've been online since I was 12, during the wild and wooly Usenet and BBS days, and was always smart enough not to give personal info to strangers; I also saw plenty of pornographic images [paper] from an even younger age and turned out just fine, thank you.)

Maybe instead of making guesses about people's ages, when ChatGPT detects potentially abusive behavior, it should walk the user through a series of questions to ensure the user knows and understands the risks.

seneca 19 hours ago [-]
It feels like OpenAI is moving into the extraction phase far too soon. They are making their product less appealing to end users with ads and aggressive user-data gathering (which is what this really is). Usually you have to be very secure in your position as a market segment owner before you start with the anti-consumer moves, but they are rapidly losing market share, and they have essentially no moat. Is the goal just to speed-run an IPO before they lose their position?
BiteCode_dev 19 hours ago [-]
The minority reports vibe is getting stronger by the minutes.
jampa 19 hours ago [-]
I imagine they're building this system with the goal of extracting user demographics (age, sex, income) from chat conversations to improve advertising monetization.

This seems to be a side project of their goal and a good way to calibrate the future ad system predictions.

samename 19 hours ago [-]
Exactly, all under the guise of "protect the children" - a tried and true surveillance and control trope.
embedding-shape 18 hours ago [-]
For context though, people have been screaming lately at OpenAI and other AI companies about not doing enough to protect the children. Almost like there is no winning, and one should just make everything 18+ to actually make people happy.
nurumaik 15 hours ago [-]
What a coincidence: "protect the children" narrative got amplified right about when implementing profiling became needed for openai profits. Pure magic
b112 15 hours ago [-]
I get why you're questioning motives, I'm sure it's convenient for them at this time.

But age verification is all over the place. Entire countries (see Australia) have either passed laws, or have laws moving through legislative bodies.

Many platforms have voluntarily complied. I expect by 2030, there won't be a place on Earth where not just age verification, but identity is required to access online platforms. If it wasn't for all the massive attempts to subvert our democracies by state actors, and even political movements within democratic societies, it wouldn't be so pushed.

But with AI generated videos, chats, audio, images, I don't think anyone will be able to post anything on major platforms without their ID being verified. Not a chat, not an upload, nothing.

I think consumption will be age vetted, not ID vetted.

But any form of publishing, linked to ID. Posting on X. Anything.

I've fought for freedom on the Internet, grew up when IRC was a thing, knew more freedom on the net than most using it today. But when 95% of what is posted on the net is placed there with the aim to harm? Harm our societies, our peoples?

Well, something's got to give.

Then conjoin that with the great mental harm that smart phones and social media do to youth, and.. well, anonymity on the net is over. Like I said at the start, likely by 2030.

(Note: having your ID known doesn't mean it's public. You can be registered, with ID, on X, on youtube, so the platform knows who you are. You can still be MrDude as an alias...)

rockskon 15 hours ago [-]
95% of what is posted on the Internet is placed with intent to harm?

What?

asdfaslkj353 13 hours ago [-]
If you consider Advertisement and News (with understanding that it is rarely unbiased) harmful, the 95% is not that far off the truth.
maest 12 hours ago [-]
Mandatory adblock for children is something I could support.
froggit 12 hours ago [-]
> Mandatory adblock for children is something I could support.

And adults.

samename 17 hours ago [-]
It doesn't make everyone happy though. I think it would be useful to examine who is asking OpenAI to protect the children, and why.
16 hours ago [-]
pluralmonad 15 hours ago [-]
"please raise my children for me. Somebody should..."
rendaw 11 hours ago [-]
Every "protect the children" measure that involves increased surveillance is balanced by an equal and opposing "now the criminal pedophiles in positions of power have more information on targets".
peddling-brink 18 hours ago [-]
And also to reduce account sharing. How will a family share an account when they simultaneously make the adult account more “adult” and make the kids' account more annoying for adults?

Lots of upsides for them.

torginus 14 hours ago [-]
Tbf, they already had all the data they could use as they liked as per their EULA, I don't think they need a cover story.
kube-system 19 hours ago [-]
I think they're quite explicitly saying they already can determine your demographics from your chats. Which is almost certainly true for most users.
intrasight 14 hours ago [-]
I don't chat - I prompt. And usually zero-shot in an incognito window. I doubt it'll determine much of anything about my demographics.
kace91 14 hours ago [-]
>I don't chat - I prompt. And usually zero-shot in an incognito window

Likely demographic: Male, 35-65

Potential interests: technology, privacy focused products

Ideal ad style: avoid emotional hooks, list product features in an objective-seeming manner.

Dilettante_ 14 hours ago [-]
I'm sure it can figure out you don't like bean soup
notatoad 11 hours ago [-]
they definitely can do that already, and chatGPT will happily tell you all the demographics it thinks it knows about you if you ask it.
CGMthrowaway 15 hours ago [-]
Chatgpt has ads now?
heliumtera 19 hours ago [-]
safe to say everything in existence is created to extract user demographics. it was never about you finding information, this trillion market exists to find you. it was never about you summarizing information getting around advertisement, it is about profiling you.

this trillion market is not about empowering users to style their pages more quickly, heh

zombot 7 hours ago [-]
More bullshit from the masters of the bullshit generator.
whynotmaybe 19 hours ago [-]
Age detection was already very effective with Leisure Suit Larry 3 age questions.

https://allowe.com/games/larry/tips-manuals/lsl3-age-quiz.ht...

tadfisher 14 hours ago [-]
That's how I learned the meaning of "prophylactic" at age 10, in front of a neighborhood friend's 286.
amarant 19 hours ago [-]
Some of those questions are highly regional. Like the W-4 being a tax form... Not where I'm from it's not!
12 hours ago [-]
15 hours ago [-]
accrual 15 hours ago [-]
That's really funny and clever that this was a thing, thanks for sharing. I never had the fortune of trying to guess my way through these.
curtisblaine 18 hours ago [-]
Wait, I don't understand this. Does it mean that they can erroneously predict I'm a minor and covertly restrict my account without me knowing? I guess it's time to cancel my subscription.
samename 18 hours ago [-]
Yes, but it's to protect the children! ;) Just scan your face in the app and your restrictions will be lifted.
sodafountan 19 hours ago [-]
I just asked ChatGPT. "Based on everything I've asked, how old do you think I am?" It was dead-on with its answer. It guessed 30-35. I'm 32.

That was just a spur-of-the-moment question. I've been using ChatGPT for over six months now.

Dilettante_ 14 hours ago [-]
>It was dead-on with its answer. It guessed 30-35. I'm 32.

Horoscopes must feel like literal magic to you.

prox 19 hours ago [-]
But did it tell on what basis it made that assumption?
sodafountan 19 hours ago [-]
Yes, it listed a few of my past questions as reference points. I had asked some questions about Nintendo 64/Dreamcast and Gamecube games. It also used information I had asked it about programming languages and some work-related questions to guess my age.

I don't know how OpenAI plans to do this going forward, just quickly read the article and figured that might be a good question to ask ChatGPT.

Edit: I just followed that up with, "Based on everything I've asked, what gender am I?" It refused to answer, stating it wouldn't assume my gender and treats me as gender neutral.

So I guess it's ok for an AI agent to assume your age, but not your gender... ?

I don't really feel like diving into the ethics of OpenAI at the moment lol.

knallfrosch 9 hours ago [-]
Age is a must. Gender is too easy. You know what the problem is? These systems know your ethnicity and religion.
prox 18 hours ago [-]
If they are shielding you giving back answers, doesn’t mean there is a lot of profiling going on behind the screens of all big tech. How close are they to behavioral monitoring?
reducesuffering 19 hours ago [-]
In case it wasn't clear LLM conversations are being analyzed in a similar way to the social media advertising profiles...

"Q: ipsum lorem

ChatGPT: response

Q: ipsum lorem"

OpenAI: take this user's conversation history and insert the optimal ad that they're susceptible to, to make us the most $

afpx 19 hours ago [-]
OpenAI are liars. I have all the privacy settings on, and it still assumes things about me that it would only do if it knew all my previous conversations.
footy 19 hours ago [-]
sounds like a good reason to stop using it no?
gostsamo 19 hours ago [-]
Let's be honest - to protect the children, big tech will put everyone under the suspicion of being one. And the issue is not how they use the technologies they have, because they have a moral responsibility to use them safely, but that we don't have technology of our own.

What I wonder lately is how an adult person can be empowered by tech to bear the consequences of their actions, and the answer usually is that we cannot. We don't have the means of production in the literal Marxist sense of the phrase, and we are being shaped by outside forces that define what we can do with ourselves. And it does not matter if those forces are benevolent or not; it matters that they are not us.

The winter is coming and we are short on thermal underwear.

The chinese open models being reason for hope is just a very sad joke.

greatgib 10 hours ago [-]
> typical times of day when someone is active, usage patterns over time,

> Users [...] will always have a [...] simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.

Nice, so now your most secret inner talk with the LLM can be directly associated with face and ID. Get ready for the fun moment when Trump decide that he needs to see what are your discussions with the AI when you pass the border or piss him off...

knallfrosch 9 hours ago [-]
20 years of Google searches, iCloud calendar events, Microsoft EMails, Netflix viewing and Amazon purchases have doomed you already.
Barrin92 19 hours ago [-]
well I hope it's better than Spotify's age prediction which came to the conclusion that I'm 87 years old.

Seriously though this is the most easily game-able thing imaginable, pretty sure teens are clever enough to figure out how to pretend to be an adult. If you've come to the conclusion that your product is unsuited for kids implement actual age verification instead of these shoddy stochastic surveillance systems. There's a reason "his voice sounded deep" isn't going to work for the cashier who sold kids booze

mmooss 11 hours ago [-]
Their teen content restrictions

> ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content, such as:

  * Graphic violence or gory content
  * Viral challenges that could encourage risky or
    harmful behavior in minors
  * Sexual, romantic, or violent role play
  * Depictions of self-harm
  * Content that promotes extreme beauty standards,
    unhealthy dieting, or body shaming
That wording implies that's not comprehensive, but hate and disinformation are omitted.
yunohn 19 hours ago [-]
This is 100% for advertising, not user safety.

It’s absolutely crucial for effective ad monetization to know the users age - significant avenues are closed down due to various legislation like COPPA and similar around the world. It severely limits which users can even be subject to ads, the kind of ads, and whether data can be collected for profiling and targeting for ads.

samename 19 hours ago [-]
Are you suggesting regulations designed to protect children actually require more data collection and enable the surveillance economy?
yunohn 9 hours ago [-]
Yes…? Even the EFF thinks so - https://www.eff.org/issues/age-verification

If this was for safety, they could’ve done it literally any time in the past few years - instead of within days of announcing their advertising plans!

hexbin010 19 hours ago [-]
How do I block these ads?
samename 19 hours ago [-]
Stop using ChatGPT or pay $20/month (for now). Alternatively use the APIs instead
hexbin010 17 hours ago [-]
I meant the ads submitted to HN