Package manager for already-available relocatable binaries

koala

Ars Tribunus Angusticlavius
7,579
I'm finding more and more software I run that is either unavailable or outdated even in distros such as Nix and Arch, which have lots of packages and very strong inertia.

Writing Arch packages looks reasonable, but I was thinking it's a bit of a waste: much of the software I'm looking at already provides binaries which can just be dropped into ~/.local/bin and you are done. This is typical of Go software, but increasingly more of the stuff I use follows this trend.

So I was thinking about writing a kind of package manager where you write a simple manifest like this:

Code:
package: talosctl
kind: github-release
repo: siderolabs/talos
archive: talosctl-{os}-{arch}

And this software locates the latest GitHub release for that repository, gets the right binary, and unpacks it to some pre-configured path.
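To make the {os}/{arch} idea concrete, here's a rough Python sketch of how the placeholders could expand at install time (the maps and the expand() helper are just illustrative, not part of any existing tool):

Python:
import platform

# Map Python's platform identifiers to the names Go projects typically
# use in release asset filenames. Purely illustrative.
OS_MAP = {"Linux": "linux", "Darwin": "darwin", "Windows": "windows"}
ARCH_MAP = {"x86_64": "amd64", "AMD64": "amd64",
            "arm64": "arm64", "aarch64": "arm64"}

def expand(template):
    return template.format(
        os=OS_MAP[platform.system()],
        arch=ARCH_MAP[platform.machine()],
    )

# expand("talosctl-{os}-{arch}") -> "talosctl-linux-amd64" on x86_64 Linux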

I'm aware that talosctl is the most favorable case (the binary is right there, not packaged inside another archive), but I think you could get a ton of software installable with these simple manifests, and keep it very well updated, with less effort than most other packaging systems I'm aware of.

If this system was able to fetch manifests via URL, then it would be very simple to package your own stuff.

AppImages could be added to this too.

Brew is similar, but since it can install anything, it requires a fixed path for binaries, which means it needs root or dirty tricks.

I think this is almost too obvious an idea. I know that it has some serious limitations... but I'm kinda surprised this doesn't already exist?
 

steelghost

Ars Praefectus
4,975
Subscriptor++
This might be a very naïve question, but it is a genuine one: what would your proposed package manager do that git clone etc. doesn't already achieve?

I'm kinda surprised this doesn't already exist?
I'm going to guess that most people don't run so much software from outside their package manager that they have found a need for such a thing, or if they did, they just wrote a few scripts to automate the necessary git commands.
 

teubbist

Ars Scholae Palatinae
823
For purely non-root local installs, eget gets you most of the way there, although it's not stateful, so if that really matters to you, I guess it falls down a little. But you could just wrap some shell around it to fill that need.
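For what it's worth, the wrapper could be tiny. A minimal sketch in Python, assuming only eget's documented --to flag (the state file location and format are my own invention):

Python:
import json
import pathlib
import subprocess

BIN_DIR = pathlib.Path.home() / ".local" / "bin"
STATE = pathlib.Path.home() / ".local" / "share" / "eget-state.json"

def install(repo):
    # Let eget pick and download the right release asset.
    subprocess.run(["eget", repo, "--to", str(BIN_DIR)], check=True)
    # Remember that we manage this repo, so update_all() can re-run
    # eget for every recorded entry later.
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    state[repo] = {"installed": True}
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(json.dumps(state, indent=2))

def update_all():
    if STATE.exists():
        for repo in json.loads(STATE.read_text()):
            install(repo)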

On a larger scale, I suspect lack of demand, plus the fact that many package DSLs allow you to mostly do this anyway. rpmbuild can be made to fetch files from the source URLs, and these days it includes a bunch of macros to explicitly support hosted content:


Both commit and tagged methods are supported.

It does require manual version bumps, but as there is no "standard" around GitHub releases, changelogs, etc., I don't think automatically grabbing the newest version is overly practical in the general case. An example of a project that would lead to breakage is VictoriaMetrics, which has a fairly breakneck release cadence for "head" releases, with an LTS train mixed in.
 

koala

Ars Tribunus Angusticlavius
7,579
This might be a very naïve question, but it is a genuine one: what would your proposed package manager do that git clone etc. doesn't already achieve?
git clone gets you the source code. Release binaries are not "in git".

I'm going to guess that most people don't run so much software from outside their package manager that they have found a need for such a thing, or if they did, they just wrote a few scripts to automate the necessary git commands.
I don't know, maybe it's a me problem, but I'm finding that even the distributions with the largest repositories miss stuff I want to run, or lag considerably behind.

On a larger scale, I suspect lack of demand, plus the fact that many package DSLs allow you to mostly do this anyway. rpmbuild can be made to fetch files from the source URLs, and these days it includes a bunch of macros to explicitly support hosted content:

That all is about fetching the source. While you can create RPMs in any manner you like, even by taking upstream binaries and packaging them as RPM archives, that's not a well-supported case.

And anyway, such packages would still be distro-specific. (Although they would work particularly well with alien, for instance.)

Brew, or running Nix on other distros... these partially fill this role.

I might play with this concept.
 
  • Like
Reactions: steelghost

malor

Ars Legatus Legionis
16,093
If you're running Debian, you can usually fetch the current source from Unstable, and build it locally with the 'debuild' system. You may have to add dependencies, but it's really a pretty slick system, and will even generate DEB files for you to install with dpkg. I've seen at least one package where the source files don't actually have any source in them, instead using debuild to download the chosen version from a git repo hosted elsewhere.

I haven't dug deeply into the system, but basically debuild starts with original source, and then has delta patches to configure it to work correctly with Debian pathing. This lets you go in and modify Makefiles. I used to do this for Apache, for instance, because the default Debian binaries didn't support enough threads for a production web server.

There is a downside to this system: you have to track updates manually, and automatic updates are one of the big features of package management. It looks like your proposed system would have the same problem, so I'd definitely be looking into debuild instead.

I know very little about Arch, except to avoid it because it blows up so often.
 

koala

Ars Tribunus Angusticlavius
7,579
I am not talking about fetching sources. I'm talking about all those (normally implemented in Go) CLI tools like kubectl, or even single-file daemons like SeaweedFS, which publish binaries. They frequently do this with the metadata necessary to do updates (e.g. I can fetch a list of releases from GitHub, know if I have the latest installed or not, request the n-1 version, etc.).

So for installing talosctl, for example, this program would:
  1. Use the GH API to list releases in https://github.com/siderolabs/talos/releases/
  2. Check if the currently installed version (from this package manager's metadata) matches the version we want to install.
  3. Fetch talosctl-linux-amd64 from whichever release (by default the latest one). It could also fetch the arm64 build, or the Windows or macOS ones, when running on a different platform.
  4. Place the binary in ~/.local/bin or a configurable path, and ensure it has +x.
Different packages could be added by adding a declarative file that explains how to fetch the binaries.

It wouldn't be very difficult to add additional logic (e.g. update all packages, but restrict things so that only releases older than n days are used, etc.); see the sketch below.
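Purely as illustration, steps 1-4 might look something like this against the public GitHub API (update() and its arguments are hypothetical, and real code would need error handling, archive support, and rate-limit awareness):

Python:
import json
import os
import stat
import urllib.request

def update(repo, asset_name, bin_name, installed_tag, dest_dir):
    # 1. Locate the latest release of the repo via the GH API.
    api = "https://api.github.com/repos/%s/releases/latest" % repo
    with urllib.request.urlopen(api) as resp:
        release = json.load(resp)

    # 2. Compare against the version recorded in our own metadata.
    if release["tag_name"] == installed_tag:
        return installed_tag  # already current, nothing to do

    # 3. Fetch the binary asset for this OS/arch (raises if not found).
    url = next(a["browser_download_url"]
               for a in release["assets"] if a["name"] == asset_name)
    dest = os.path.join(dest_dir, bin_name)
    urllib.request.urlretrieve(url, dest)

    # 4. Ensure the installed binary is executable.
    os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)
    return release["tag_name"]

# e.g.:
# update("siderolabs/talos", "talosctl-linux-amd64", "talosctl",
#        None, os.path.expanduser("~/.local/bin"))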
 

malor

Ars Legatus Legionis
16,093
I am not talking about fetching sources. I'm talking about all those (normally implemented in Go) CLI tools like kubectl, or even single-file daemons like SeaweedFS, which publish binaries.
If you use existing source-based systems to make binaries, that solves more or less the same problem: existing stable distro binaries being out of date.

But making your own binary repo system? That's just silly. People would be idiots to trust you.

If you're really determined to do system-agnostic distribution, then use one of the container formats. Flatpak is reasonably universal. You'd only need to make a flatpak when the author hadn't already done it. And then, ideally, you'd push the tooling back upstream to the dev and have them generate their own. Trusting the dev is a requirement for using software, so trusting them to package things properly is just a slight extension of the same thing.

Trusting some random joe to package binaries correctly is adding risk, and people will be resistant.
 

koala

Ars Tribunus Angusticlavius
7,579
Well, I made a quick prototype: https://github.com/alexpdp7/ubpkg/ , perhaps it's easier to understand that way.

This downloads EXISTING binaries from the Internet. So this installs the binaries provided by upstream. This is pretty similar to how winget works:


vs. my manifest:


(basically this automates fetching the latest release...)
 

andygoblins

Ars Centurion
230
Subscriptor
ArchLinux was designed with the "I want to install fresh stuff from source" use case in mind.

For ArchLinux, the standard way of handling this is makepkg. The AUR contains plenty of examples of "-git" packages that automatically pull the latest version from git. The manifests are a bit more complicated than your idea, but they hook into pacman, so you can cleanly uninstall and upgrade without leaving orphaned files everywhere. You could build a template for Go-based tools without much difficulty.

Alternatively, if Go is the main use case, couldn't you just use
Code:
go install <module>@latest
to install/compile straight into user space? (In modern Go, go install with a version suffix is the supported way to install binaries; go get no longer does this.)
 

andygoblins

Ars Centurion
230
Subscriptor
Yeah, sometimes with ArchLinux you have to go down an AUR rabbit hole to get what you want: you need to install A, which depends on B, which depends on C, and all have to be built from source. For existing AUR packages, you can use tools such as pacaur to automatically install all the upstream dependencies. But if you need a lot of obscure libraries that aren't already in the AUR, you'll have to create PKGBUILD files for each one, which may be tedious depending on what you need.

At this point, I think the obligatory XKCD 927 is in order, but if your design solves a problem you have, then that's all that matters, right? If you come up with something better, share it here for the rest of us!
 

koala

Ars Tribunus Angusticlavius
7,579
OK, so I was pointed somewhere else to https://github.com/Rishang/install-release , which is pretty close to what I'm talking about.

And it looks great, but I tried it with some of the software I use and I found some issues...

So I went ahead and wrote the worst Rust ever, implementing the bare minimum to execute the following package definition:

Python:
gh = github_repo("errata-ai/vale")

# Upstream tags releases as "vX.Y.Z"; asset filenames use the bare version.
release = gh.latest_release()
version_str = release.name().removeprefix("v")

# Map this tool's OS/arch identifiers to the names vale uses in its
# release asset filenames.
os_str = {
    "linux": "Linux",
    "macos": "macOS",
    "windows": "Windows",
}[os]

arch_str = {
    "x86_64": "64-bit",
    "aarch64": "arm64",
}[arch]

archive_format = {
    "linux": "tar.gz",
    "macos": "tar.gz",
    "windows": "zip",
}[os]

# Locate the asset, e.g. "vale_<version>_Linux_64-bit.tar.gz".
asset = release.get_asset_url(
    "vale_{version_str}_{os_str}_{arch_str}.{archive_format}".format(
        version_str=version_str,
        os_str=os_str,
        arch_str=arch_str,
        archive_format=archive_format,
    )
)

# Pull the "vale" binary out of the archive and install it as "vale".
install_binary(extract_from_url(asset, "vale"), "vale")

, which is actually Starlark, not Python (Starlark is a Python dialect designed for embedded, sandboxed scripting).

And it actually works, for this single piece of software. But only on Linux, for now. And I might need several weeks just to clean up the mess I made...
 
  • Like
Reactions: andygoblins