
I'm proud to share a major development status update for XPipe, a new connection hub that lets you access your entire server infrastructure from your local desktop. XPipe 14 is the biggest rework so far: it brings an improved user experience, better team features, performance and memory improvements, and fixes for many existing bugs and limitations.

If you haven't seen it before, XPipe works on top of your installed command-line programs and does not require any setup on your remote systems. It integrates with your tools such as your favourite text/code editors, terminals, shells, command-line tools and more. Here is what it looks like:

[Screenshot: Hub]

[Screenshot: Browser]

Reusable identities + Team vaults

You can now create reusable identities for connections instead of having to enter authentication information for each connection separately. This will also make it easier to handle any authentication changes later on, as only one config has to be changed.

Furthermore, there is a new encryption mechanism for git vaults, allowing multiple users to have their own private identities in a shared git vault by encrypting them with each user's personal key.

Incus support

  • There is now full support for Incus
  • The newly added features for Incus have also been ported to the LXD integration

Webtop

For users who also want to have access to XPipe when not on their desktop, there is the XPipe Webtop Docker image, a web-based desktop environment that can be run in a container and accessed from a browser.

This Docker image has seen numerous improvements and is now considered stable. There is now also support for hosting the container on ARM systems. If you use Kasm Workspaces, you can now integrate the webtop into your workspace environment via the XPipe Kasm Registry.
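
If you want to give it a quick try, a minimal run command looks roughly like the following. The image name, published port, and config path here are only a sketch, so check the xpipe-webtop repository for the actual values:

    # Sketch only: the image name, port, and volume path are assumptions,
    # verify them against the xpipe-webtop repository before relying on this.
    docker run -d \
      --name xpipe-webtop \
      -p 3000:3000 \
      -v xpipe-webtop-config:/config \
      ghcr.io/xpipe-io/xpipe-webtop:latest

The desktop should then be reachable at http://localhost:3000 (or whatever port the image actually exposes).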

Terminals

  • Terminals are now automatically focused after launch
  • There is now support for the new Ghostty terminal on Linux
  • There is now support for the Wave terminal on all platforms
  • The Windows Terminal integration will now create and use its own profile to prevent certain settings from breaking the terminal integration

Performance updates

  • Many improvements have been made to memory efficiency, making XPipe much less demanding on available RAM
  • Various performance improvements have also been implemented for local shells, making almost any task in XPipe faster

Services

  • There is now the option to specify a URL path for services that will be appended when opened in the browser
  • You can now specify the service type up front instead of having to choose between http and https every time you open it
  • There is now a new service type to run commands on a tunneled connection after it is established
  • Services now indicate more clearly whether they are active or inactive

File transfers

  • You can now abort an active file transfer. You can find the button for that on the bottom right of the browser status bar
  • File transfers where the target write fails due to permission issues or missing disk space are now cancelled more cleanly

Miscellaneous

  • There are now translations for Swedish, Polish, and Indonesian
  • There is now an option to censor all displayed contents, allowing for a simpler screen-sharing workflow with XPipe
  • The Yubikey PIV and PKCS#11 SSH auth options have been made more resilient against PATH issues
  • XPipe will now commit a dummy private key to your git sync repository so that your git provider can potentially detect any leaks of your repository contents
  • Fix password manager requests not being cached and requiring an unlock every time
  • Fix Yubikey PIV and other PKCS#11 SSH libraries not asking for a PIN on macOS
  • Fix some container shells not working due to issues with /tmp
  • Fix fish shells launching as sh in the file browser terminal
  • Fix zsh terminals not launching in the current working directory in the file browser
  • Fix permission denied errors for script files in some containers
  • Fix some file names that required escapes not being displayed in the file browser
  • Fix special Windows files like OneDrive links not being shown in the file browser

A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

Outlook

If this project sounds interesting to you, you can check it out on GitHub or visit the website for more information.

Enjoy!

top 42 comments
[–] [email protected] 46 points 3 days ago

Interesting

closed source

And I noped out right away, especially if it has to run on my servers

[–] non_burglar 16 points 3 days ago (2 children)

What is your target audience for this? I'm having trouble understanding who this product is for.

[–] [email protected] 6 points 3 days ago (1 children)

Anyone who manages servers, virtual machines, containers, and so on. That can be in your homelab or at your job if you do that professionally. So assuming you are selfhosting something, you can get some use out of this. And the more you selfhost and have to manage, the more useful it becomes.

[–] non_burglar 12 points 3 days ago* (last edited 3 days ago) (2 children)

I appreciate the reply, but I guess I wasn't clear on what I was asking.

It's obvious who this is for in the literal sense, what I mean is: what is the use case for this?

On the homelab front, I don't see enough need to unify my GUI access, and I have roughly 30 containers to manage. At that point, most homelab admins gravitate to automation.

On the professional front, I can tell you that unifying the keys to mgmt interfaces to critical infrastructure in a single app is not a welcome tool to see on my junior admin desktops. And if it's simply the interface to mgmt portals without storing keys, then I would have my doubts about a junior admin who hasn't developed a personal strategy to manage this themselves.

Don't get me wrong, I'm happy to encourage you to develop this, but the second you write "trying to make a living from this", you should know that these questions are coming.

If I were across the table from you trying to understand what you're selling me, I would want to know:

  • how do you handle secrets in transit and at rest?
  • can I deploy this once and set access for various departments or employees?
  • can I find out who has been using the tool?
  • how does the app handle updates?

You can see where this is going. If I buy this tool for use by several people, I don't want to have to wrap it in vault entries and update scripts just to meet compliance with my client's environment.

[–] [email protected] 5 points 3 days ago* (last edited 3 days ago)

So the vision is that this is only a connection hub, essentially a mediator that brings together your tools like terminals, editors, command-line clients and more. XPipe itself doesn't have an SSH client, it just uses your locally installed one. Same goes for text editors, terminals, password managers, git clients, browsers, and more. It doesn't replace anything, it works with your tools.

About unifying GUI access for your homelab, I guess that is personal preference. Some people like a GUI-based workflow, some prefer a more terminal-focused experience. But with XPipe you can get both. You can use it as a quick terminal launcher if you don't want to use any of the other GUI functionality. For example, if you are a frequent SSH user, see my other reply (https://sh.itjust.works/post/31552343/16245994) on how it can make your life easier. You can try it out for a few minutes to see how it works for you, you can get started very quickly and there is no setup required on any servers. There's no commitment here.

If you like automation, there is also a built-in HTTP API (which you have to enable first). You can automate almost anything with that. The documentation for that is available here: https://github.com/xpipe-io/xpipe/blob/master/openapi.yaml and if you like python, there is also https://github.com/xpipe-io/xpipe-python-api
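
As a rough idea of what a call against the local API can look like (the port, endpoint path, and auth details below are only illustrative placeholders; the openapi.yaml above is the authoritative reference):

    # Illustrative only: the API port, endpoint path, and auth flow are placeholders,
    # see the openapi.yaml linked above for the real routes and schemas.
    curl -s -X POST http://localhost:21723/connection/query \
      -H "Authorization: Bearer <session token from the handshake endpoint>" \
      -H "Content-Type: application/json" \
      -d '{}'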

For the professional use case, the same concept of a connection hub applies here. XPipe doesn't manage your keys, you can use whatever storage format or SSH agent configuration you want. If you use a password manager in your organization, you can connect that to XPipe and have XPipe itself not store any secrets. In terms of transit security, it just forwards everything to your locally installed SSH client, for example. If you care about all the security details, you can find them at https://xpipe.io/assets/documents/Security%20in%20XPipe.pdf .

You can deploy this in your organization with whatever tools you use, maybe the .msi with Intune, or some other management tool for Linux and macOS. There are standard installers available for every use case. These can also handle updates, so if you disable automatic updates within the app and instead want to manage that yourself, you can use the installers to upgrade installations in-place with the latest releases from GitHub.

About the data storage and usage, if you want to use shared vaults in your organization, these are all handled via your own git client and git remote repositories. You can host them wherever you want. You get a full history of who did what in that vault with git automatically.
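
Since the vault is just a normal git repository, you can also inspect that history with standard git tooling, for example:

    # Inspect the vault repository history with plain git
    git log --oneline --stat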

[–] [email protected] 1 points 3 days ago (1 children)

On the professional front, I can tell you that unifying the keys to mgmt interfaces to critical infrastructure in a single app is not a welcome tool to see on my junior admin desktops

As opposed to having them spread out? Across multiple apps?

I would have my doubts about a junior admin who hasn't developed a personal strategy to manage this themselves.

What about using a single app to organize their connection methods to various VMs and containers?

[–] non_burglar 1 points 2 days ago

Keys spread out? I don't understand...

[–] [email protected] 3 points 3 days ago (1 children)

For anyone who connects to various servers or workstations, it's extremely helpful.

[–] [email protected] 2 points 3 days ago (1 children)

How long have you been using it?

[–] [email protected] 1 points 3 days ago (1 children)
[–] [email protected] 1 points 3 days ago (1 children)

Nice, any highlights or complaints?

[–] [email protected] 1 points 2 days ago

Not that I can think of in recent versions, they've fixed any issues I've had with it.

[–] [email protected] 12 points 3 days ago (1 children)

What would be a good open source option with these features?

[–] daddy32 2 points 2 days ago (1 children)
[–] [email protected] 2 points 2 days ago

webmin

Shudder

[–] [email protected] 11 points 3 days ago* (last edited 3 days ago) (2 children)

Seemed so nice until I tried to add my very first personal server, which runs Oracle Linux, and it paywalled me immediately. So if you want it for personal use but you use the wrong(!) distro on your server, tough luck! You gotta pay for it unless you replace your server with something like Debian, I guess. That was the end of it for me. As constructive feedback: it would be nice to see a list of which distro/server OS variants are paywalled and which ones are not. For now Asbru will do it for me.

Edit: turns out it's written out on the pricing page in detail. See the comment below.

[–] HybridSarcasm 5 points 3 days ago (1 children)

In fairness, the lack of Enterprise OS connectivity is spelled out in the Pricing breakdown on their website.

[–] [email protected] 4 points 3 days ago

Thank you for the heads up, I was a dork, it's indeed fully listed in the table on the pricing page. I'll quote that part for context. From the pricing page:

The following systems are classified as commercial operating systems within XPipe and connections to those systems are only possible starting from the homelab plan:

  • Amazon Linux systems
  • Oracle Linux systems

The following systems are classified as enterprise operating systems within XPipe and connections to those systems are only possible starting from the professional plan:

  • Red Hat Enterprise Linux systems
  • SUSE Enterprise Linux systems
  • Zentyal systems
  • Windows Enterprise systems
  • Windows Azure systems

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago) (1 children)

Yeah, that is implemented under the assumption that these distros are most of the time used in enterprise contexts. I know that this is not always the case, so there is the option to upgrade your license to the next tier at no additional cost if you're only using it for personal use. Just send me an email and I can upgrade it for you.

And out of curiosity, is there a particular reason why you chose Oracle Linux for your personal server?

[–] [email protected] 3 points 3 days ago (1 children)

Yeah, I saw the option for a free upgrade for claimed personal use, and that's nice. Paying for such a product is also fine in general. I was just frustrated at not being able to try it with a single server.

The reason for Oracle Linux is that my Linux journey pretty much started and continued with RHEL-based distros: Mandrake (yep, good old Mandrake) at home at first, then an actual Red Hat subscription at the research center where I volunteered, and mostly CentOS on my servers as well as Fedora as my workstation OS.

After the CentOS upstream change, I started using Oracle and it's nice and stable. As far as the explanation on the product page goes, I guess anything that looks like RHEL (like Rocky) will also ring the enterprise bells.

Thankfully most hobbyists, like Raspberry Pi users, will go with a Debian-based stock OS or something like Ubuntu Server, so they'll be fine with the free version of XPipe.

[–] [email protected] 1 points 3 days ago (1 children)

I see. Other RHEL-like distros such as Rocky are available for free in XPipe. It's just limited for a few very specific distros like RHEL itself and Oracle Linux, as there's usually an enterprise reason why those are chosen.

[–] [email protected] 2 points 3 days ago (1 children)

Speaking of the enterprise and free features, what does the open-core version provide compared to the binary releases? Can the core component from the source code release be used on its own as-is?

[–] [email protected] 2 points 3 days ago (1 children)

The open-core version provides almost all features of the community edition. It is not completely there yet, because in practice some components are difficult to separate or to include in the source since everything is part of the same build. E.g. the whole licensing code is present in the community build because you can upgrade the license in-place, but that code is not part of the public source, and a completely standalone build would still require it.

So it's currently not fully possible to release the core component on its own as-is, but if you clone it and run it in your own development environment, any components that are required but not included are fetched from an existing XPipe installation on that system. So cloning the community repository and running the dev build works fine.
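
In practice that is something like the following; the exact build command may differ, so see the repository readme for the current development setup:

    # Roughly: clone the community repo and launch the dev build.
    # The Gradle task name may differ; check the repository docs.
    git clone https://github.com/xpipe-io/xpipe
    cd xpipe
    ./gradlew run

Any closed components the build needs are then picked up from the local XPipe installation as described above.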

[–] [email protected] 1 points 2 days ago

Great then, thank you for the explanation.

[–] just_another_person 9 points 3 days ago (2 children)

I'm still very confused about this project and its aim...

So it's a GUI front end for all the other system utilities you would need to install first in order to make it work properly, right? So then...like...why all the overhead if it's just literally opening up the tool you intended to use anyway? Is it actually opening a new CLI window with an SSH connection to a server, for instance? It seems like more steps to open this, go through multiple clicks to find the connection you want, and then get plopped back into an SSH session versus just typing 'ssh [email protected]'.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago) (1 children)

If you are looking for key points from the perspective of a heavy ssh CLI user, you can think of it as a fancy wrapper around your existing SSH client and configuration. It will automatically detect your SSH config and supports exactly the same set of features and options as your SSH client as it internally uses that one. It doesn't try to replace your existing SSH client and configuration, it works with it.

What it will add:

  • You have direct access to all systems running on the servers you connect to, e.g. docker containers, using the exact same graphical interface. On the CLI you also have that in theory, but that's tedious

  • You can bring your shell environments / init scripts / aliases with you in a noninvasive way. I.e. you don't have to modify the remote system's dotfiles; when you connect through XPipe it will automatically set up any scripts you want to have available

  • You can link up your password manager with your SSH client and other connection methods that require passwords

  • You have the ability to synchronize your connections and environments through git, including your SSH configs

  • You get special integration for SSH tunnels that allows you to toggle them to start / stop in the background and open tunneled services in the browser automatically

  • You get an overview of all your remote connections and can access the file system of any connected remote system via a uniform graphical user interface, allowing you to use your own desktop text editors instead of terminal-based ones. It also supports dynamic sudo elevation, so you can save files as root without having to log in as root

  • Plus all the integrations for other tools as well. For example, you want to connect to a certain VM guest in a hypervisor via SSH but it is not reachable from the outside? XPipe can connect you to it through the hypervisor host, automatically determine IP addresses, and open a terminal session instantly

[–] just_another_person 2 points 3 days ago (1 children)

I'm really not trying to diss on your project, but all of this is just really the default for a normally configured SSH client in a proper ecosystem. MAYBE this is somewhat useful for beginners, or Windows users I guess. The only thing in there that sort of seems to improve a workflow might be the tunneling, but even then I don't see it actually saving time.

I appreciate you taking the time to reply, but I guess I'm just not going to understand the target user here unless they are absolute beginners and unfamiliar with how all of this fits together.

[–] [email protected] 4 points 3 days ago

Yeah, most of the things listed can be done with any command-line SSH client; XPipe aims to improve the user experience for these tasks and also make them faster / require less typing. I would argue you can save quite a bit of time if you use it correctly. And there is support for more than just plain SSH.

I would just recommend trying it out for like 5 minutes. If you still don't see the point of it, you can just uninstall it and move on.

[–] [email protected] 1 points 3 days ago (1 children)

It's an easy way to manage multiple servers/VMs remotely. It makes transferring files to remote headless systems easy and simplifies remembering multiple hosts. It's akin to MobaXterm, a similar Windows-only project.

[–] just_another_person 1 points 3 days ago (1 children)

Okay, so it is aimed at those unfamiliar with the underlying pieces. That makes a bit more sense.

[–] [email protected] 3 points 3 days ago (1 children)

I wouldn't really say that though. It aims to make the whole process require less typing, be more ergonomic, require less thinking, and go a bit faster than doing it all manually. There are plenty of expert options that you can use to fully customize your connections and your workflow.

Among the active users, there are many experienced professionals who use it because it makes their life easier.

[–] just_another_person 1 points 3 days ago (1 children)

A stock SSH session on any modern Linux distro is: agent, config, and keyring.

All of these combine to make it as simple as 'ssh user@hostname' with no other typing necessary, depending. Further customizations are all done on ssh session configs, which is what you've done here, but put it on a GUI. I just don't see the benefit except to people brand new to it is all. It's a shortcut of shortcuts, but hey, if people use it, more power to em.

[–] [email protected] 1 points 3 days ago (1 children)

For normal SSH this is all accurate, maybe I should have focused on wider topics.

Staying in the realm of SSH, one place where XPipe's integrations add value is virtual machines. If you quickly spin up a VM in a hypervisor such as Proxmox or KVM, it's not that straightforward anymore. If you want to reach a VM running on a remote hypervisor host, you probably have to first use the hypervisor host as a jump server to be able to access the VM in the first place. You have to determine the external IP of the VM (which might change frequently), check if any kind of guest agent is available, and check whether an SSH server is running (and start it in the VM shell if not). Only then can you type ssh user@host for that VM. XPipe does all of that automatically. So from your perspective, you only click on it and it performs all these tedious tasks in the background and drops you into a terminal session.
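
For comparison, doing that by hand usually means maintaining something like the following in your ~/.ssh/config for every VM, with the guest IP looked up and kept current yourself (the host aliases, users, and address here are only illustrative):

    # Manual equivalent, roughly: jump through the hypervisor host for each VM.
    # The host aliases, users, and guest IP below are illustrative examples.
    Host pve-host
        HostName hypervisor.example.com
        User root

    Host my-vm
        HostName 192.168.122.50    # guest IP, looked up and updated by hand
        User admin
        ProxyJump pve-host

After that, 'ssh my-vm' works, at least until the guest IP changes again.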

[–] just_another_person 1 points 3 days ago (1 children)

Hey, man. I'm not dogging your project whatsoever here; me, myself, I am just trying to understand the use case, and you've explained it in great detail. Much appreciated 👍

[–] [email protected] 1 points 3 days ago (1 children)

Yeah I am still trying to figure out how to explain it the best way to convince people to give it a try

[–] just_another_person 2 points 3 days ago (1 children)

Hey, I totally get it. You built something you like, and you want people to give it a try. Let me give you some hopefully helpful but absolutely unsolicited advice, and feel free to ignore me.

The first thing you need in a project is a purpose: "I am building this to make X thing better."

Then you need a target audience: "Why would people prefer to use this over other existing solutions?"

THEN you need a hook: "This thing is better because X feature."

Now please take this next bit as only constructive criticism, because I'm just trying to help out what seems to be a burgeoning developer who has a passion for their product...BUT, I think the confusion you're seeing in this thread is because you're building a thing that doesn't answer any of the above questions for a lot of people. So just digest that, and I'm sorry if it sucks, but the next part is more helpful...

I looked through the code a bit, and just from the exception handling alone, it seems it will break if every little thing about the underlying environment isn't exactly just-so. A version of something gets upgraded, and this might break, for example. Have you considered maybe doing a rewrite to natively load libraries instead of shelling out all the commands? I think it would greatly help the resilience of the app itself from breaking due to environmental changes, AND an added bonus benefit...maybe eventually be able to allow contributions from followers to help adjust code or write plugins.

The reason why most FOSS projects do this is simple: they want it to run in a multitude of places and environments, and the noise generated from everything not being exact about an environment is huge. So instead of relying on shell commands and output, just look up an open library that already does SSH, and write for that. Your code is organized pretty well, so it shouldn't be a huge undertaking, just some learning and doc hounding.

[–] [email protected] 1 points 3 days ago (1 children)

Thanks for taking your time to write this.

I think the main point I'm trying to figure out here is whether this is a communication issue, i.e. how I describe it is not optimal, or whether this is a fundamental project issue. Because I think I have a clear vision and target audience, and I am part of that audience myself. The thing is, there isn't one standout feature. The value comes from the combination and integration of multiple features that work together and allow for a smooth user experience. I can say it has support for SSH, Docker, Kubernetes, hypervisors, and more, but all of that on an individual level isn't that unique; it's the combination that you can use all of them together. This is difficult to put into words, so trying it out for yourself for a few minutes usually yields better results.

About the shell commands, that is one of the standout features about it, so it's on purpose. I know this approach is more difficult and error prone than doing some kind of native library stuff, but it also allows me to run the same commands in remote shells on remote systems.

[–] just_another_person 1 points 3 days ago (1 children)

Well, I can say for myself (and others who may see this project who are adept with these types of connections) that the question still comes down to "Why would I use this over already existing tooling?"

For me, this is just SSH (which I use daily non-stop) with extra steps. For something like containers...ehhhhh it's a bit of a stretch. I'm so used to just running the commands to see what I need, plus I make sure everything has a DNS name, and I can't think of a simpler way to make it easier than what I already do. I feel like remote desktop clients all have this solved in their GUI, so I'll ignore that. Even hitting a button to tell it what I want to connect to is more work than just doing it, honestly, so a GUI does not make sense for me, so I know I'm not a target audience for sure.

The point is that if I can't find a good use for it, and you want me to try it out, what is the feature that would sell me on it? I think the answer to that unlocks a lot of other things you can attack from there.

[–] [email protected] 1 points 3 days ago (1 children)

Alright, I see your points.

Now that you have spent a lot of time discussing it, even looking at the code, one thing that would be valuable for me is how accurate your expectations based on what you read here are compared to the actual app. If it is pretty much as expected, then I guess at least my summaries are accurate. If it's not, then I can still do a better job at that part. Fundamentally changing the project itself is a little bit too late, but at least the communication about why people could use it can be changed. And I'm not trying to gain a new user here as it's probably not for you, but it would still be interesting to me. You can give it five minutes and use the .tar.gz or the .appimage if you don't want to install anything.

[–] just_another_person 1 points 2 days ago (1 children)

I think you might be underselling yourself a bit here. You don't need to rewrite the entire app at all, just a piece at a time because of how you have it organized. Like this:

Release 2.0: moved SSH to X lib
Release 2.1: moved RPC to X lib

And so on.

If you don't wish to do that, then I would work on messaging. From the code and how you're expecting responses, it just won't work for very long with Linux distros, though I appreciate that you made many portable formats. Instead, have you tried looking at building a library that does discovery of connectable nodes on the local network? That would hit big with the Windows crowd, and possibly some other OS users, if suddenly this does "automatic discovery" instead of having to manually create connections.

[–] [email protected] 1 points 2 days ago

Alright, thanks for your insights as an outsider. It is always a difficult task to accurately judge your own project when you're intimately familiar with it. So I will see what I can do about the things you mentioned.

[–] kingblaaak 2 points 3 days ago

This looks pretty cool, I have a few servers in random places.