The one I grabbed to test was the ROG Azoth.
I also checked my Iris and Moonlander - both cap out at 6, but I believe I can raise that limit with QMK, or turn it on via a config key in Oryx on the Moonlander.
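For the QMK boards, this is roughly what I'd try - the keyboard and keymap names are placeholders, and the option names are from memory, so check the QMK docs before flashing:

```
# In the keymap's rules.mk:
#   NKRO_ENABLE = yes
# Optionally in the keymap's config.h, so NKRO is always on:
#   #define FORCE_NKRO
# Then rebuild and flash (keyboard/keymap are placeholders):
qmk compile -kb <keyboard> -km <keymap>
qmk flash -kb <keyboard> -km <keymap>
```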
Per this thread from 2009, the limit was conditional upon using a particular keyboard descriptor documented elsewhere in the spec, but keyboards are not required to use that descriptor.
I tested just now on one of my mechanical keyboards, on macOS, connected via USB-C, using the Online Key Rollover Test, and was able to get 44 keys registered at the same time.
I’d just like to interject for a moment. What you’re referring to as Alpine Linux is in fact Pine’s fork, Alpine / Pine Linux, or as I’ve taken to calling it, Pine’s Alpine plus Pine Linux. Pine Linux is an operating system unto itself, and Pine’s Alpine fork is another free component of a fully functioning Pine Linux system.
16 GB of RAM, though? Is it even optimized for the Ryzen 9950X3D?
And a 4 TB SSD - not even necessarily NVMe?
Doesn’t seem high powered to me.
Do you mean like a FOSS version of https://soundiiz.com/transfer-playlist-and-favorites?
Or at a song/album level, a FOSS version of https://odesli.co/?
Oh 100% agreed - in this instance, it’s clear that OBS has a well maintained package that should be prioritized. But they could keep their repo first and remove OBS (and other known-to-be-well-maintained apps) from it to accomplish that.
They put their repo first on the list.
Right. And are we talking about the list for OBS or of repos in general? I doubt Fedora sets the priority on a package level. And if they don’t, and if there are some other packages in Flathub that are problematic, then it makes sense to prioritize their own repo over them.
That said, it’s a different story if those problematic packages come from other repositories, or if there was some alternative to putting their repo first that would have kept unofficial builds from showing up first without deprioritizing official, verified ones like OBS. I haven’t maintained a package on Flathub like the original commenter you replied to, but I don’t get the impression that’s the case.
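For what it’s worth, even with Fedora’s remote prioritized, a user can pin an install to Flathub explicitly; something like this (the OBS app ID is from memory, so confirm it against the search output):

```
flatpak remotes                               # see which remotes are configured (fedora, flathub, ...)
flatpak search obs                            # results show which remote each build comes from
flatpak install flathub com.obsproject.Studio # install the verified Flathub build specifically
```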
Why did Fedora make their packages take priority? Is it because the priority is otherwise random and if you don’t have a priority set, that leads to the issue they mentioned? Because if so, that sounds like a reasonable action by Fedora and like the real culprit is Flathub.
Clearly they’re cosplaying as a Canonical engineer whose internal explanation and pleas for them to not take this approach fell upon deaf ears /j
If you’re a C developer who doesn’t know Rust, no.
I can’t use signal.
Why? Do you not have a phone number? Is it blocked in your country? Are you legally prohibited from using software with end to end encryption?
You can control that with a setting. In Settings - Privacy, turn on “Query in the page’s title.”
My instance has a magnifying glass as the favicon.
Giant squids are the bears of the ocean
Wouldn’t be a huge change at this point. Israel has been using AI to determine targets for drone-delivered airstrikes for over a year now.
https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip gives a high level overview of Gospel and Lavender, and there are news articles in the references if you want to learn more.
This is at least being positioned better than the ways Lavender and Gospel were used, but I have no doubt that it will be used to commit atrocities as well.
For now, OpenAI’s models may help operators make sense of large amounts of incoming data to support faster human decision-making in high-pressure situations.
Yep, that was how they justified Gospel and Lavender, too - “a human presses the button” (even though they’re not doing anywhere near enough due diligence).
But it’s worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.
Yes, OpenAI is well known for this, but they’ve also created other types of AI models (e.g., Whisper). I suspect an LLM might be part of a solution they would build but that it would not be the full solution.
Thanks for clarifying! I’ve heard nothing but praise for Kagi from its users so that’s what I was assuming, but Searxng has also been great so I wouldn’t have been too surprised if you’d compared them and found its results to be on par or better.
By the way, if you’re self hosting Searxng, you can add your own index. Searxng supports YaCy, which is an actively developed, open source search index and crawler that can be operated standalone or as part of a decentralized (P2P) network. Here are the Searxng docs for that engine. I can’t speak to its quality as I still haven’t set it up, though.
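If you want to try it, here’s roughly how I’d stand up a standalone YaCy node for Searxng to talk to - the image name, port, and data path are from my memory of the YaCy docs, so double-check them there:

```
# Run a single YaCy node; its web UI and API end up on port 8090.
docker run -d --name yacy \
  -p 8090:8090 \
  -v yacy_data:/opt/yacy_search_server/DATA \
  yacy/yacy_search_server:latest
# Then enable the yacy engine in Searxng's settings.yml and point its base_url
# at http://localhost:8090, following the Searxng docs linked above.
```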
there is a better open source meta search engine
I already use Searxng and have never used Kagi, but I’m curious why you say that Searxng is “better.” Are you saying that because the quality of the searches is better, because it’s open source and Kagi isn’t, or for some other reason?
I’m not the person you responded to, but I can say that it’s a perfectly fine take. My personal experience and the commonly voiced opinions about both browsers support this take.
Unless you’re using 5 tabs max at a time, my personal experience is that Firefox is more than an order of magnitude more memory efficient than Chrome when dealing with long-lived sessions with the same number of tabs (dozens up to thousands).
I keep hundreds of tabs open in Firefox on my personal machine (with 16 GB of RAM) and it’s almost never consuming the most memory on my system.
Policy prohibits me from running Firefox on my work computer, so I have to use Chrome. Even with much more memory (both on 32 GB and 64 GB machines) and far fewer tabs (20-30 at most vs 200-300), Chrome often ends up taking far too much memory and suffering a substantial performance drop, and I have to go through and prune the tabs I don’t need right now, bookmark things that can be done later, etc…
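If you want to sanity-check this yourself, here’s roughly how I’d compare them (the process-name patterns are rough, and RSS double-counts shared pages, so treat the numbers as ballpark):

```
# Sum resident memory across every process whose command line mentions each browser.
ps axo rss,args | awk '/[fF]irefox/ { sum += $1 } END { printf "Firefox: %.2f GiB\n", sum/1048576 }'
ps axo rss,args | awk '/[cC]hrome/  { sum += $1 } END { printf "Chrome:  %.2f GiB\n", sum/1048576 }'
```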
Also, see https://www.techspot.com/news/102871-zero-regrets-firefox-power-user-kept-7500-tabs.html - I’ve never seen anything similar for Chrome and wasn’t able to find anything.
Your reply has nothing to do with fair use doctrine.
I think the best way to handle this would be to just encode everything and upload all files. If I wanted some amount of history, I’d use some file system with automatic snapshots, like ZFS.
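To illustrate the snapshot idea, a minimal sketch assuming a ZFS dataset named tank/music (the name is made up; in practice I’d let something like sanoid or zfs-auto-snapshot schedule these):

```
zfs snapshot tank/music@"$(date +%Y-%m-%d)"   # instant, cheap point-in-time copy
zfs list -t snapshot -r tank/music            # see what history you're keeping
```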
If I wanted to do what you’ve outlined, I would probably use rclone with filtering for the extension types or something along those lines.
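For example, something like this (the remote name and paths are placeholders; see rclone’s filtering docs for the full syntax):

```
# Upload only the lossy files, leaving lossless ones local.
rclone copy ~/Music music-server:Music \
  --include "*.mp3" \
  --include "*.ogg" \
  --progress
```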
If I wanted to do this with Git specifically, though, this is what I would try first:
First, add lossless extensions (`*.flac`, `*.wav`) to my repo’s `.gitignore`.
Second, schedule a job on my local machine that:

- For any lossless file that doesn’t already have an equivalent lossy file (`.mp3`, `.ogg` - possibly also with a confirmation that the codec is up to my standards with a call to ffprobe, avprobe, mediainfo, exiftool, or something similar), encodes the file to your preferred lossy format.
- Runs `git status --porcelain` to see if there have been any changes.
- If there have, runs `git add --all && git commit --message "Automatic commit" && git push`.
- Ideally with a more meaningful commit message, e.g., `Added album: "Satin Panthers - EP" by Hudson Mohawke` or `Removed album: "Brat" by Charli XCX; Added album "Brat and it's the same but there's three more songs so it's not" by Charli XCX`.

A sketch of what that job could look like is below.
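Something along these lines, assuming FLAC source files and Ogg Vorbis output - the paths and encoder settings are placeholders, and I haven’t battle-tested it:

```
#!/usr/bin/env bash
set -euo pipefail
cd ~/Music

# Encode any FLAC that doesn't already have a lossy counterpart.
find . -name '*.flac' -print0 | while IFS= read -r -d '' src; do
    dst="${src%.flac}.ogg"
    if [ ! -f "$dst" ]; then
        # -nostdin keeps ffmpeg from eating the loop's stdin.
        ffmpeg -nostdin -i "$src" -vn -c:a libvorbis -q:a 6 "$dst"
    fi
done

# Commit and push only if something actually changed.
if [ -n "$(git status --porcelain)" ]; then
    git add --all
    git commit --message "Automatic commit"
    git push
fi
```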
Third, schedule a job on my server that runs `git pull` at regular intervals.

One issue with this approach is that if you delete a file (as opposed to moving it), the space is not recovered on your local or your server. If space on your server is a concern, you could work around that by running something like the answer here (adjusting the depth to an appropriate amount for your use case):
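The gist, as I understand it, is to shallow the clone and then let Git discard objects that are only reachable through old history - something along these lines (commands from memory, not copied from the answer; adjust the depth as noted above):

```
git fetch --depth 1                               # keep only recent history
git reflog expire --expire-unreachable=now --all  # drop reflog entries keeping old objects alive
git gc --prune=now --aggressive                   # actually reclaim the disk space
```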
Another potential issue is that what I described above involves having an intermediary Git remote to push to and pull from, e.g., one hosted on a Git forge like GitHub, Codeberg, etc… This could result in copyright complaints or something along those lines, though.
Alternatively, you could use your server as the git server (or check out forgejo if you want a Git forge as well), but then you can’t use the above trick to prune file history and save space from deleted files (on the server, at least - you could on your local, I think). If you then check out your working copy in a way such that Git can use hard links, you should at least be able to avoid needing to store two copies on your server.
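A sketch of that self-hosted variant, with made-up host, paths, and branch name - the key detail is that a plain local clone on the same filesystem hard-links objects by default, so the working copy doesn’t duplicate the object store:

```
# On the server: a bare repo to push to.
ssh music-server 'git init --bare /srv/git/music.git'

# On your local machine: point the existing repo at it and push.
git remote add origin ssh://music-server/srv/git/music.git
git push --set-upstream origin main

# Back on the server: clone the bare repo locally; objects are hard-linked, not copied.
ssh music-server 'git clone /srv/git/music.git /srv/media/music'
```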
The other thing to check out, if you take this approach, is Git LFS.

EDIT: Actually, I take that back - you probably don’t want to use Git LFS.