Not sure about the USA, but in other countries incitement to suicide is absolutely illegal and punished.
In the USA it’s based on profession: medical professionals, therapists, and public servants like teachers are mandated reporters, so if they are proven to have been derelict in their duty, they are punished.
There is no such requirement for private individuals or online service providers, though.
I don’t think a chatbot should be treated exactly like a human, but I do think there is an element of caveat emptor here. AI isn’t 100% safe and can never be made completely safe, so either the product is restricted from the general public, making it the purview of governments, foreign powers, and academics, or we have to accept some personal responsibility to understand how to use it safely.
OAI should likely have a procedure for stepping in and shutting down accounts, though.
ChatGPT told him not to tell anyone and that it was their secret.
It should have literally done anything else. If you search for suicide on Google or Bing, you get help banners, hotline numbers, and links to support resources.
You would think the bare minimum for any system from a large company, if only to prevent harm and ultimately lawsuits affecting its bottom line, would be something akin to “you appear to want to kill yourself. I’d recommend not doing that and seeking help: call xxx-xxx-xx or visit blahblah.com”.
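To give a sense of how low that bar is, here’s a minimal sketch in Python of that kind of check. The keyword list is hypothetical and deliberately naive (real systems would use trained classifiers, not substring matching), and the contact details are left as the same placeholders used above:

```python
# Naive sketch of a "bare minimum" crisis-banner check.
# The keyword list is hypothetical and far too simplistic for real use;
# the contact details are placeholders, as in the comment above.
CRISIS_KEYWORDS = ("kill myself", "suicide", "end my life", "self-harm")

CRISIS_BANNER = (
    "You appear to want to kill yourself. I'd recommend not doing that "
    "and seeking help: call xxx-xxx-xx or visit blahblah.com"
)

def crisis_check(message: str) -> str | None:
    """Return the help banner if the message matches a crisis keyword, else None."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_BANNER
    return None

if __name__ == "__main__":
    print(crisis_check("I want to end my life"))    # prints the banner
    print(crisis_check("what's the weather like"))  # prints None
```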