• 0 Posts
  • 17 Comments
Joined 2 months ago
Cake day: August 10th, 2025

  • Hoo fuckin’ boy

    I do have to say that including Hyprland in the original post is silly. Hyprland had a problem with ignorant users making ignorant speech in their community, and they didn’t police it, and once a shitstorm started, they recognized their error and started trying. It’s fine. Putting them up top, and Omarchy which is literally lead-developed by an open fascist, down below, is weird.

    Holy God it’s a shitstorm… reading

    I just want to point out there is nothing true in this statement. No such thing as far-right in current government (unless you consider anything right of your beliefs to be far-right, which is just silly), and immigration is completely fine, currently. My wife is an immigrant and non-us citizen and we have many immigrant friends. They are only deporting criminals, illegals, people who have broken the law.

    Fuckin’ hell man. (That quote is from nobody in particular, just some random user chiming in… but still fuckin’ hell man.)

    nrp’s reaction in the reply chain mostly just reads like he hasn’t picked up what’s going on. He still thinks fascism is “a political view” instead of “an active threat that might come for him and people he cares about right now, into his house and his safety, that needs resistance.” I don’t think DHH needs to be in prison or anything unless he’s done something. I do believe in free speech. But yes, the time has come for boycotts, strikes, marches, strengthening the organizations that we’ll need as things continue to get worse. It sounds like nrp is just privileged enough living his tech life that he doesn’t grasp that, and he can’t understand what people are mad about here.

    Nobody remembers Switzerland as unsung heroes of World War 2, because of their commitment to freedom of expression and commerce for any side without prejudice. They’re lucky they got away with it, to be honest.


  • Pretty sure there is a pretty generous window where you can just return the thing for a refund no questions asked. It might be worth looking into.

    Depending on the wording of the return policy, you might even be able to request one and tell them the reason is “The far right has taken over the world’s biggest government and they’re snatching people in the streets. The time to hide support for them behind ‘everyone’s welcome to their opinion’ is over. If at this moment in history you’re not willing to exclude far-right people from your circle, then go fuck yourself, fuck your hardware, give me my money back, in hindsight people should have done this to BMW and IG Farben both before and after the war. I hope you wake the fuck up. You will not be safe indefinitely from them coming for you, unless people braver than yourself stop them before they reach you.”

    Usually I am against bullying people into stating the political views, or making the political decisions, you want them to. You can think they’re wrong about this (as obviously I do, for the reasons stated above) and say so without needing to try to strong-arm them. But in this case, fuck ’em, for the reasons stated above. Read the return policy first, of course, to make sure you’re on solid ground; I don’t really know what it is.


  • Yeah, I get it. I don’t think it is necessarily bad research or anything. I just feel like maybe it would have been good to go into it as two papers:

    1. Look at the funny LLM and how far off the rails it goes if you don’t keep it stable, let it “build on itself” iteratively over time, and don’t put the right boundaries on it
    2. How should we actually wrap an LLM in a sensible framework so that it can pursue an “agent” type of task: what leads it off the rails and what doesn’t, what are some ideas for keeping it grounded, and which ones work and which don’t
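    A minimal sketch of the kind of boundaries point 2 is getting at (all names here are hypothetical, and the LLM is just a `choose_action` callable): the model only ever picks from a fixed menu of actions, anything off-menu is rejected, and a hard step limit keeps the loop from spiraling.

```python
# Hypothetical sketch, not any particular paper's setup: the framework, not
# the model, owns the action space and the stopping condition.

ALLOWED_ACTIONS = {"check_inventory", "set_price", "order_stock", "wait"}
MAX_STEPS = 20

def run_bounded_agent(choose_action, apply_action, state):
    """Run the agent loop with an action whitelist and a hard step limit."""
    for _ in range(MAX_STEPS):
        action = choose_action(state)      # e.g. one LLM call per step
        if action not in ALLOWED_ACTIONS:
            action = "wait"                # reject anything off-menu
        state = apply_action(action, state)
        if state.get("done"):
            break
    return state
```

    The point being: the model never gets to invent an action or decide on its own when the episode ends.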

    And yeah, obviously they can get confused or output counterfactuals or nonsense as a failure mode; what I meant was just that they don’t really do that in response to an overload / “DDoS” situation specifically. They might do it as a result of too much context or a badly set-up framework around them, sure.


  • PhilipTheBucket@piefed.social to Technology@lemmy.world · *Permanently Deleted* · +15/−1 · edited 23 days ago

    Initial thought: Well… but this is a transparently absurd way to set up an ML system to manage a vending machine. I mean, it is a useful data point, I guess, but to me it leads to the conclusion: “Even though LLMs sound to humans like they know what they’re doing, they do not. Don’t just stick the whole situation into the LLM input and expect good decisions and strategies to come out of the output; you have to embed it into a more capable and structured system for any good to come of it.”

    Updated thought, after reading a little bit of the paper: Holy Christ on a pancake. Is this architecture what people have been meaning by “AI agents” this whole time I’ve been hearing about them? Yeah this isn’t going to work. What the fuck, of course it goes insane over time. I stand corrected, I guess, this is valid research pointing out the stupidity of basically putting the LLM in the driver’s seat of something even more complicated than the stuff it’s already been shown to fuck up, and hoping that goes okay.

    Edit: Final thought, after reading more of the paper: Okay, now I’m back closer to the original reaction. I’ve done stuff like this before, and this is not how you do it. Have it output JSON, build some tolerance and retries into the framework code for parsing the JSON, be more careful with the prompts to make sure it’s set up for success, and definitely don’t include all the damn history in the context up to the full wildly-inflated context window, sending it off the rails. Basically: be a lot more careful with how you set it up than this, and put a lot more limits on how much you’re asking of the LLM, so that it can actually succeed within the little box you’ve put it in. I am not at all surprised that this setup went off the rails in hilarious fashion (and it really is hilarious, you should read it). Anyway, that’s what LLMs do.

    I don’t know if this is because the researchers didn’t know any better, or because they were deliberately setting up the framework around the LLM to produce bad results, or because this stupid approach really is the state of the art right now, but this is not how you do it. I’m actually a little skeptical about whether you even could set up a framework for a current-generation LLM that would enable it to succeed at an objective and pretty frickin’ complicated task like the one here, but regardless, this wasn’t a fair test. If it was meant as a test of “are LLMs capable of AGI all on their own, regardless of the setup, like humans generally are,” then congratulations, you learned the answer is no. But you could have framed it more directly around that being the question, instead of wrapping a poorly-designed agent framework around it.
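    To make the “JSON output, retries, truncated history” advice concrete, here’s a minimal sketch (all names hypothetical; `call_llm` stands in for whatever client you actually use):

```python
import json

# Hypothetical guardrails: force JSON output, tolerate and retry on parse
# failures, cap how much history goes back into the context, and fall back
# to a safe no-op instead of letting garbage drive the system.

MAX_HISTORY = 10   # only the last N turns go back into the prompt
MAX_RETRIES = 3

def ask_for_decision(call_llm, system_prompt, history):
    # Truncate history instead of letting it inflate to the context window.
    recent = history[-MAX_HISTORY:]
    prompt = system_prompt + '\nRespond with JSON only: {"action": ..., "reason": ...}'
    for _ in range(MAX_RETRIES):
        raw = call_llm(prompt, recent)
        try:
            decision = json.loads(raw)
        except json.JSONDecodeError:
            continue  # tolerate malformed output and retry
        if isinstance(decision, dict) and "action" in decision:
            return decision
    # Safe default rather than passing along something unusable.
    return {"action": "wait", "reason": "model output unusable"}
```

    None of this makes the model smarter; it just keeps one bad completion from poisoning every step after it.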


  • PhilipTheBucket@piefed.social to Technology@lemmy.world · *Permanently Deleted* · +44/−1 · 23 days ago

    Yeah it’s a bunch of shit. I’m not an expert obviously, just talking out of my ass, but:

    1. Running inference for every device in the building against “our dev server” would not have maintained a usable level of response time for any of them, unless he meant to say “the dev cluster” or something and his home wifi glitched right at that moment and made it sound different
    2. LLMs don’t degrade under load by giving wrong answers; they degrade by slowing down or ceasing to produce tokens
    3. Meta already has shown itself to be okay with lying
    4. GUYS JUST USE FUCKING CANNED ANSWERS WITH THE RIGHT SOUNDING VOICE, THIS ISN’T ROCKET SCIENCE, THAT’S HOW YOU DO DEMOS WHEN YOUR SHIT’S NOT DONE YET
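    And the canned-answer approach really is that simple; a toy sketch (responses invented), where a few expected demo prompts map to pre-written answers and everything else gets a safe fallback:

```python
# Hypothetical demo stub: no live inference, no way to melt down on stage.
CANNED = {
    "what's the weather": "It's 72 and sunny in Menlo Park.",
    "text mom": "Okay, sending 'On my way!' to Mom.",
}

def demo_answer(utterance):
    """Return a pre-written answer for known demo prompts, else a safe fallback."""
    key = utterance.strip().lower().rstrip("?!.")
    return CANNED.get(key, "Sorry, I didn't catch that. Try again?")
```

    Feed the output to the TTS voice and nobody in the audience knows the difference.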

  • I think the crisis of Trump is likely to be worse than any crisis in the Western world for the last 50 years. I think the closest analogue is probably the collapse of the USSR. So yes, some of the rich people upped their wealth by orders of magnitude, and honestly you might be right that Zuck might manage to be one of that category, but also some of them lost everything or got thrown out windows, or had to survive in reduced capacity within their new walled fortresses in the horrifying new meta. I feel like more likely is that the MAGA world will remember Facebook censoring their posts about ivermectin, and not feel like Zuck needs to have a seat at the table, no matter how many ass-kissing sessions he shows up at the White House to do.

    For example I feel like breaking up Meta and mandating Truth Social and TikTok as the only new sanctioned social media going forward might be one possible outcome. It’s kind of hard to say and I won’t swear that you’re definitely wrong that he might come out way ahead in the end. I’m just saying that this type of crisis is a very different type of crisis.



  • PhilipTheBucket@piefed.social to Technology@lemmy.world · *Permanently Deleted* · +17/−2 · 2 months ago

    Honestly, man, I get what you’re saying, but also at some point all that stuff just becomes someone else’s problem.

    This is what people forget about the social contract: It goes both ways, it was an agreement for the benefit of all. The old way was that if you had a problem with someone, you showed up at their house with a bat / with some friends. That wasn’t really the way, and so we arrived at this deal where no one had to do that, but then people always start to fuck over other people involved in the system thinking that that “no one will show up at my place with a bat, whatever I do” arrangement is a law of nature. It’s not.