• 0 Posts
  • 25 Comments
Joined 4 years ago
Cake day: June 28th, 2020



  • At the end of the log you find:

    822413 connect(4, {sa_family=AF_UNIX, sun_path="/run/user/1000/gcr/ssh"}, 110) = 0
    ...
    822413 read(4, 
    

    meaning it’s trying to interact with the ssh-agent, but in the end it never gets a response.

    Use the lsof command to figure out which program is providing the agent service and try to resolve the issue that way. If it’s not the OpenSSH ssh-agent, then maybe you can disable that program’s ssh-agent functionality and use the real ssh-agent in its place…

    My wild guess is that the program might be trying to interactively confirm the use of the key with you, but for some reason it is not succeeding in doing that.
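
    For example, a minimal sketch of that lsof check (the socket path is the one from the strace log above, so adjust it to whatever your log shows):

        # Which process has the agent socket open?
        lsof /run/user/1000/gcr/ssh
        # Or ask via the variable ssh itself consults:
        lsof "$SSH_AUTH_SOCK"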



  • As mentioned, -v (or -vv) helps to analyze the situation.

    My theory is that you already have something providing the ssh agent service, but that process is somehow stuck, and when ssh tries to connect to it, it either doesn’t respond to the connect or accepts the connection but never actually interacts with ssh. Quite possibly ssh has no timeout for interacting with the ssh-agent.

    Running eval $(ssh-agent -s) starts a new ssh-agent and replaces the relevant environment variables with new ones, thereby avoiding the stuck process.

    If this is the actual problem, then before running the eval, echo $SSH_AUTH_SOCK would show the path of the existing ssh-agent socket. In that case you can use lsof $SSH_AUTH_SOCK to see which process is providing it. Quite possibly it’s gnome-keyring-daemon if you’re running GNOME. As to why that process would stop working, I don’t have any ideas.

    Another way to analyze the problem is strace -o logfile -f ssh .. and then checking what is at the end of the logfile. If the theory holds, it would likely end with a connect call to the ssh-agent socket.
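
    Roughly, those checks could be run in this order (user@host is just a placeholder for your real destination):

        # Is an agent socket already advertised, and who owns it?
        echo "$SSH_AUTH_SOCK"
        lsof "$SSH_AUTH_SOCK"

        # Work around a possibly stuck agent by starting a fresh one in this shell:
        eval $(ssh-agent -s)
        ssh-add            # re-add your key to the new agent

        # Or trace ssh itself and inspect the end of the log:
        strace -o logfile -f ssh user@host
        tail -n 20 logfile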






  • flux@lemmy.ml to Linux@lemmy.ml · Zed on Linux is out! · 2 months ago

    Great git integration can work well in an editor. I use Magit in Emacs, which is probably as full-featured a Git client as there can be. Granted, for operations such as cherry-picking, rebasing on top of a branch, or git reset I most often use the command line (but Magit for interactive rebase).

    But editor support for version management can give other benefits as well, for example visually showing which lines are different from the latest version, easy access to file history, easy access to line-based history data (blame), jumping to versions based on that data, etc.

    As I understand it, VS Code’s support for Git is so basic that it’s easy to see why one would not find any benefit in it.


  • Yes, just mount to /mnt/videos and symlink that as needed.

    I guess there are some benefits to mounting directly under $HOME, though, such as find/fd working “as expected”, and permissions being limited automatically by the $HOME permissions (though those can also be adjusted manually).

    For finding files I use plocate, though, so I wouldn’t get that marginal benefit from mounting below $HOME.
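
    A minimal sketch of that setup (the device and directory names are only illustrative):

        # Mount the data filesystem outside $HOME...
        sudo mkdir -p /mnt/videos
        sudo mount /dev/vg0/videos /mnt/videos
        # ...and symlink it in wherever it is convenient:
        ln -s /mnt/videos ~/Videos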


  • My /home is also on a separate filesystem, so in principle I don’t like mounting data under it, because then I cannot unmount /home (e.g. for fsck purposes) without also unmounting all the other filesystems mounted there. I keep all my filesystems on LVM.

    So I just mount to /mnt and use symlinks.

    Exception: sshfs, which I often mount under my home directory.
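
    For the sshfs case, something like this (host and paths are placeholders):

        mkdir -p ~/remote
        sshfs user@host:/some/path ~/remote
        # ...and detach it when done:
        fusermount -u ~/remote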






  • flux@lemmy.ml to Linux@lemmy.ml · Ubuntu Snap Hate · 5 months ago

    I think the second point is the biggest for me: it’s almost like Canonical wanted to have a single dominant store for apps, as the ecosystem they are building supports only one. And, apparently, that one server is also closed?

    So if you try to make an alternative source and give people instructions on how to configure their snap installation to use it (I found this information very hard to find, for some reason…), your “store” probably won’t have the same packages Canonical’s has, so users won’t be able to find the packages, and I imagine updates are also broken?

    Contrast this with flatpak: you just install apps from wherever. From flathub, or from your own site. It doesn’t matter. There is no business incentive built into the tools to push everyone toward flathub.org.
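
    For illustration, adding a remote and installing from it with flatpak looks roughly like this (the Firefox ID is a real Flathub app; the example.org URL is only a placeholder for “your own site”):

        # Add Flathub as a remote and install an app from it:
        flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
        flatpak install flathub org.mozilla.firefox
        # Or install directly from a .flatpakref hosted anywhere:
        flatpak install https://example.org/some-app.flatpakref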


  • I just noticed https://lemmy.ml/u/giloronfoo@beehaw.org had proposed the same, but here it is with more words ;).

    I would propose you try to split your data manually into logically separate parts, so that e.g. 0.8 TB fits on one drive, 0.4 TB on another, and maybe two 0.2 TB sets on a third one. Then you’d have a script that uses traditional backup approaches with modern backup apps to back up the particular data set belonging to the disk currently attached to the system. This approach painlessly gives you modern “infinite increments” backups, where older versions of the data are kept without doing separate full and incremental backups. You should also write a script to check that no important data is left out of the backups and that nothing is backed up twice (except for data you actually want to back up twice?).

    For example, you could have a physical drive with a “photos and music” sticker on it, used to back up your ~/Photos and ~/Music.
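
    A rough sketch of the per-drive script for that data set (restic is just one example of such a backup app; the repository path is whatever the “photos and music” drive mounts to):

        #!/bin/sh
        set -e
        # Repository on the attached "photos and music" drive (illustrative path):
        REPO=/mnt/backup-photos-music/restic-repo
        # restic -r "$REPO" init     # run once to create the repository
        restic -r "$REPO" backup ~/Photos ~/Music
        # Optional retention policy, keeping roughly a year of monthly snapshots:
        restic -r "$REPO" forget --keep-monthly 12 --prune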

    At some point one of those splits might become too large to fit into its allocated storage, which would mean additional manual maintenance. Apply foresight to avoid these situations :).

    If that kind of separation is not possible, then I guess tar with multi-volume splitting is one option, as suggested elsewhere.
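
    If it comes to that, GNU tar’s multi-volume mode would look something like this (sizes and paths are only illustrative):

        # Split the archive into ~200 GiB volumes; tar prompts for the next volume when one fills up.
        tar --create --multi-volume --tape-length=200G --file=/mnt/backup/archive.tar ~/data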