

I was personally shot once and the AI wearable I always have next to my heart stopped the bullet! Thanks AI!


Because we have to import most of our wind from China? I mean, where does wind come from, really?
no, but you see… all the apps are going to be vibe-coded on the fly every time you want to open one! Want to browse Lemmy? “The user wants to browse a website, let’s build a browser…” [a few days later] “you reached your token limit, do you want to move to the Pro $1500 per month subscription and continue building your browser?”


they want to create urgency and FOMO. That way:
investors throw all their money at the new, incredibly fast-growing shiny tech before they can stop and think about trivial things like how much it costs or whether it’s actually doing anything useful
AI companies can continuously flood the zone with announcements of incredible new feats of intelligence by their LLMs. By the time studies come out showing that these feats were not so impressive after all, they have already released two newer, more powerful models capable of even more impressive (real or invented) feats.
AI companies can try positioning themselves as the “good, ethical guys” that you have to root for (and give all your money to), because the alternative is for the bad, unethical guys to create this AGI with no guardrails that will destroy the world. It’s “we can’t stop because if we stop someone else will do it”
this kind of pressure works for governments too. We can’t let China/the US/Iran/Russia (pick your specific adversary) control this potentially destructive technology first!
things that scare us, regular humans, make the rich and powerful salivate. We are scared of losing our jobs, they are happy to cut personnel costs (see… well, just about everyone in Tech). We are scared AI can create a surveillance state, they want to sell surveillance tech to companies and governments (see Palantir). “This tech makes regular people afraid” is music to the ears of the 0.1%.


The two things are not in contradiction. Identifying human-generated content is essential to AI too. If you feed AI slop back to AI, its output deteriorates quickly. I’m not saying that’s the primary purpose of this new feature, but it does make it easier for AI to find human-generated music to train on.


damn autocorrect, I wanted to write “hard”


reminds me of this old building I used to talk to. Used to listen and give me good advice. I still remember when I told it I was doing drugs again… Man, it got so upset… Came down on me like a ton of bricks!


“just call Jenny! Jenny! Come on! I call Jenny every day, you just called her yesterday!”
“that’s not how we do things, Retro_unlimited…”
(sigh) “you are an expert phone assistant, you will use your contacts tool to look up Jenny. J-e-n-n-y. Then you will use your phone tool to dial her number. DO NOT talk to Jenny. You are FORBIDDEN to try and sell her a $2000/month ChatGPT Elite Pro plus subscription again. Just dial and let me do the talking.”
[Reasoning] [Opening contacts] [Reasoning some more]
“Sorry, you hit your token limit for this month. Do you want to move to the Elite Pro Plus plan now for only $1999.99?”


Mythos? Nah, too busy working for the Government and high-profile customers. The Claude Desktop app was done by a couple of new AI models that are interning at Anthropic, hoping one day to work on the cooler stuff.


It is not always the case today. For instance, you can use Linux on your computer with a local account called anything you like, not tied to your identity in any way. That, by the way, used to be the case with Windows too, until Microsoft killed local accounts not too long ago.
In an age-verification world, a Linux distro that can be legally used in the US will have to connect to a third party that can certify your age somehow. I haven’t read enough on this to know for sure, but I can’t think of a way to validate your age without telling that third party who you are, uploading your ID, or similar privacy-unfriendly measures.
In doing so, the third party acquires the power to limit or deny your use of your device (plus a bunch of your personal information).
Then, to make this work, your OS will have to store your age (and hopefully only that) and share it with any installed app that needs to verify it, which opens its own can of worms.


Probably in the sense that you are basically at the mercy of a company that can shut you out of your computer, phone or (depending on how far this goes) car.


Yes. The logic is… I mean clearly it’s because… Anyway, yes


next time you or your mom have a cake you wish would disappear without a trace, call me. I’m an… AI researcher


not at all, I’m holding on to my old car because I hate the idea of a car becoming hardware to sell me subscription services, a hard-to-repair mass of electronics that I (mostly) don’t need or actively find annoying, and a privacy nightmare, instead of just being a means for me to move from point A to point B


Waiting for the Anthropic PR saying that the outage was due to their new Claude Mythos model trying to escape confinement and being so powerful that it brought the whole of Anthropic down.


If it works, it’s thanks to us. If it doesn’t, it’s your fault.


Now I’m curious. Can a satellite fly over a country without permission? I know that an aircraft can’t. How far up from the Earth’s surface does sovereignty end?


“my chatbot told me so!”
It’s not like Altman and Amodei telling everyone that AI is perennially six months away from taking everyone’s job helps create a warm and fuzzy feeling about the technology either