I was shocked as I went through the source, struggling to find any modules written in C. Craziness.
There are shitty people on YouTube too, so why hate on a platform just because shitty people use it? Beats giving money to YouTube; we have to start somewhere if we want to decentralize more.
https://odysee.com/ – this one is also worth checking out, Louis Rossmann even posts there.
What about Biden recently signing the new spying bill into law, the one that expands wiretapping of US citizens?
I agree, wish this was the actual goal but it’s going to be hard to pry those rights out of their hands.
It’s weird seeing comments that outline the actual problem getting downvoted here more than the superfluous comments that do not address the real problem at all. Bizarroworld.
Would you rather a hostile foreign entity do it instead, one with a vested interest in sowing destructive chaos? That's the alternative.
I’m still on the Google bandwagon of typing this kind of query:
stuff i am searching for before:2023
… or ideally, even before COVID-19, if you want more valuable, less tainted results. It’s only going to get worse from here; 2024 is the year the web gets saturated with garbage data (yes, I know it was already bad before, but now AI is pumping this shit out at an industrial scale.)
I don’t really have to fix anything in Linux. I do a lot of advanced things, though (I’m a software dev): I’ll manually change executables’ paths, swap them out with symlinks, use custom newer GCC builds, etc. But even with all of that, I still rarely ever have to “fix” anything. I have been waiting, prepared, for when this Ubuntu install craps out so I can finally wipe it and switch to Arch on this PC… but it just keeps going and going without a hiccup.
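A minimal sketch of the kind of manual rewiring described above, assuming a hypothetical custom-built GCC (every path and the stub “compiler” here are illustrative, not from the original comment):

```shell
# Sketch: stand up a fake custom toolchain, then point a symlink at it,
# the same way you'd retarget a real gcc on your PATH.
tmp=$(mktemp -d)
mkdir -p "$tmp/gcc-13/bin"
printf '#!/bin/sh\necho custom-gcc-13\n' > "$tmp/gcc-13/bin/gcc"
chmod +x "$tmp/gcc-13/bin/gcc"

# -s symbolic, -f replace an existing target, -n treat an existing
# symlink as a file instead of descending into the directory it points at
ln -sfn "$tmp/gcc-13/bin/gcc" "$tmp/gcc"

"$tmp/gcc"            # invokes the custom build through the symlink
readlink "$tmp/gcc"   # shows where the link actually points
```

The nice part is that re-running `ln -sfn` with a different target retargets every caller in one step, which is why symlinks work well as a switch between alternate toolchains.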
I’m not sure what people are referring to when they say they have to fix things all the time, but obviously no two people have the same experience, and there are so many variations of a Linux system. Take 10 different desktop environments or window managers, different pieces of software or hardware, and every permutation is going to have either more problems or fewer.
Ultimately I would recommend that anybody just give all of the distros and DE/WMs a try. A good try: give each a few weeks and see how it feels. You’re not going to know what you’ve been missing, or whether anything has bugs or quirks at all, until you do.
Try phind.com. It’s got an insanely advanced model trained on a ton of their own proprietary code, and it’s free too (or paid, with more features, more prompts per day, etc.)
I doubt that was intentional, they would likely want to hide that latency but the CPU time required to scan everything just is what it is.
https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b
The hooked RSA_public_decrypt verifies a signature on the server’s host key by a fixed Ed448 key, and then passes a payload to system().
It’s RCE, not auth bypass, and gated/unreplayable.
Ohh that makes way more sense, thanks. I haven’t used Debian in like 10 years but it was obviously the same back then too.
Is the slowness on purpose? To help the attacker identify which nodes are running the compromised sshd? What other reason could there be?
They could be more like AMD in that regard, to answer your question:
Direct contributions to Linux kernel: AMD contributes directly to the Linux kernel, providing open-source drivers like amdgpu, which supports a wide range of AMD graphics cards.
Mesa 3D Graphics Library: AMD supports the Mesa project, which implements open-source graphics drivers, including those for AMD GPUs, enhancing performance and compatibility with OpenGL and Vulkan APIs.
AMDVLK and RADV Vulkan drivers: AMD has released AMDVLK, their official open-source Vulkan driver. In addition to this, there's also RADV, an independent Mesa-based Vulkan driver for AMD GPUs.
Open Source Firmware: AMD has released open-source firmware for some of their GPUs, enabling better integration and functionality with the Linux kernel.
ROCm (Radeon Open Compute): An open-source platform providing GPU support for compute-oriented tasks, including machine learning and high-performance computing, compatible with AMD GPUs.
AMDGPU-PRO Driver: While primarily a proprietary driver, AMDGPU-PRO includes an open-source component that can be used independently, offering compatibility and performance for professional and gaming use.
X.Org Driver (xf86-video-amdgpu): An open-source X.Org driver for AMD graphics cards, providing support for 2D graphics, video acceleration, and display features.
GPUOpen: A collection of tools, libraries, and SDKs for game developers and other professionals to optimize the performance of AMD GPUs in various applications, many of which are open source.
I think it comes down to the tens of millions of dollars the Reddit executives sold out for. It’s easy not to care when someone is throwing $100 million at you. Also: fuck spez.
There’s probably even a ‘sentiment’ tracking system to automatically remove negative comments at this point.
I’ve been doing this for over a year now, starting with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has evolved many times over without people really knowing what’s happening behind the scenes.) The problem that remains is the “context window.” Claude.ai is > 100k tokens now, I think, but the context still limits how much code an entire ‘session’ can produce within that window. I’m still trying to push every model to its limits, but another big problem in the industry now is effectiveness, measured via “perplexity” at a given context length.
https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small
This plot shows that as the window grows (in direct proportion to the number of tokens in the code you insert, plus every token the model generates alongside it), everything it produces becomes less accurate and more perplexing overall.
But you’re right overall: these things will continue to improve, yet you still need an engineer to actually make the code function in a particular environment. I just don’t get the feeling we’ll see that change within the next few years, but if it happens, every IT worker on earth is effectively useless, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.
You just described all of my use cases. I need to get more comfortable with copilot- and codeium-style services again; I enjoyed them to some extent 6 months ago. Unfortunately, my current employer has to be federally compliant with government security protocols, and I’m not allowed to ship any code in or out of some dev machines. In lieu of that, I still run LLMs on another machine acting, like you mentioned, as sort of my Stack Overflow replacement. I can describe or ask anything I want and immediately get extremely specific custom code examples.
I really need to get codeium or copilot working again just to see if anything has changed in the models (I’m sure they have.)
Sweet, do you have any links on how to set that up? My next goal is to set up my own lemmy.<mydomain> instance so I can pull various things into my own aggregation. Last time I tried, I had errors after the Rust compilation steps; I need to try it again.
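One way to sidestep the Rust compilation errors entirely is to run the prebuilt images instead of building from source. This is only a rough sketch of what a Lemmy docker-compose deployment looks like; the image tags, ports, env vars, and config paths here are assumptions, so check the official Lemmy “Install with Docker” docs for the real file:

```yaml
# Hypothetical minimal lemmy docker-compose sketch (not the official file)
version: "3.7"
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: lemmy
      POSTGRES_PASSWORD: changeme   # replace before deploying
      POSTGRES_DB: lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data

  lemmy:                            # the Rust backend, prebuilt
    image: dessalines/lemmy:latest
    ports:
      - "8536:8536"
    volumes:
      - ./lemmy.hjson:/config/config.hjson   # server config (hostname, db)
    depends_on:
      - postgres

  lemmy-ui:                         # the web frontend
    image: dessalines/lemmy-ui:latest
    ports:
      - "1234:1234"
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
    depends_on:
      - lemmy
```

You’d still put a reverse proxy (nginx/caddy) with TLS in front of this for a public lemmy.<mydomain> instance; the official docs cover that part too.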