I think we've lost the battle of package management. I don't see how we can dig ourselves out of this hole. Rust, Node, Go, Python, Elixir, Ruby... Everything rolled their own package management, virtual environments, the ability to pin/lock to specific versions of code. All of this is wholly incompatible with traditional package management.

The battle is over. The next generation of programmers reinvented the wheel and it's full of razorblades. I think we should focus our resources on containing this mess with jails/containers. There's no other viable solution. It's our grim reality now.
The GME™🙈🙉🙊 @gme

@feld yet your avatar is Beastie and FreeBSD has the best package management system on Earth. /usr/ports

@gme Yep, I'm a FreeBSD ports developer // on the core (portmgr) team.

I've spent the afternoon trying to package something (Go+Rust) whose build framework is so many layers deep in expecting to be able to fetch data from the internet and from GitHub/Crates.io that there's clearly no end in sight.

Proper cleanroom build environments like the one FreeBSD uses for packaging require that everything be provided with the application source code and that no internet access be available during build time.

Sadly this is completely incompatible with modern workflows. You end up chasing down thousands of dependencies by hand, trying to get them all in the right place for the build, and just when you think you've solved the problem the build process runs a command that expects to be able to query a website.

Fucking insanity, man.
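(For the Rust half of that stack there is at least a partial escape hatch: `cargo vendor` copies every crate dependency into a local `vendor/` directory so the build can run with no network access. A minimal sketch of the source-replacement config it asks you to put in `.cargo/config.toml` — directory name here is Cargo's default, your port's layout may differ:)

```toml
# Redirect all crates.io lookups to the vendored copies on disk,
# so `cargo build --offline` needs no network access at all.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

The vendored tree can then be shipped alongside the application source, which is exactly what a cleanroom build environment wants.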
@gme There will be untold amounts of regret the day GitHub, for example, goes offline permanently and people realize they can no longer build their own software because they don't technically have all the source code for their application on their servers and the build process *requires* information from a third-party website.
@feld @gme
> incompatible with traditional package management
> source code and no internet access
they all require network access to distro repos to be of any use past the initial install

distributed systems are the only way. torrent yourself a 5 TB of `node_modules` and share that HDD on your dialup bbs.

@feld @gme i haven't checked back in a while, but this same conversation was playing out over on the debian mailing lists not so long ago - spurred by npm in particular, but it feels like it's the same problem across the board. giant collections of tiny dependencies with giant collections of their own tiny dependencies managed within piles of ad hoc, half-assed, mutually antagonistic language-specific package managers and build tools.

@brennen @feld If your language and framework of choice require such "piles of ad hoc, half-assed, mutually antagonistic language-specific package managers and build tools" then you deserve what happens when your proprietary infrastructure eventually fails.

@gme @feld that's an understandable sentiment in a way, but what is really happening is that we are _all_ going to get happens when these systems fail - we sort of already are - because increasingly, these are all of the systems.

and mostly, we don't deserve it.

@gme @feld "get _what_ happens" when these systems fail, i meant to say.

@gme @feld @brennen welcome to the world of Android development.

Many years ago, when Gradle was new I was at a Google presentation and I was having problems figuring out which jars were needed for a certain feature, so I asked how to deal with that.

The answer was essentially "just put the dependency in Gradle and all will be automatic". I asked what to do if you didn't want Gradle, and they didn't have an answer.

@loke
Now, Debian has the Android SDK available as its own packages. This gives you far more flexibility to build the packages the way you like. You don't need Google any more to build Android packages.
See wiki.debian.org/AndroidTools

@brennen @feld @gme

@Billie @gme @feld @brennen That would have been useful back then, but now things have become even worse. It's not even possible to build an Android application without Gradle anymore.

Well, it might be theoretically possible. The tools are all there, but it's completely undocumented.

@loke @brennen @feld @gme
I know this *will* sound stupid to you, but after 1-2 days of trying it out I now find it quite comfortable to work with Gradle.

At first sight, I thought "why the *** Gradle", but now I am used to it 😃

@Billie @gme @feld @brennen actually I don't find it stupid. I don't doubt that you can get comfortable with it. What I was trying to say was that Android development using IntelliJ IDEA used to be incredibly smooth, but as soon as Gradle and Android Studio (which is just a licensed version of IntelliJ IDEA) appeared the experience got quantifiably worse. And it's still worse, even to this day.

@loke @brennen @feld @gme
Are you on linux?
If yes:
1) Take an up-to-date Debian-based distro, e.g. Ubuntu 17.10
2) Install the Android SDK from the Debian packages
3) Install IntelliJ IDEA
4) Install Gradle (e.g. version 3.2.1 in /opt)

Configure IntelliJ IDEA to use 2-4 in the project settings.

The result is a perfectly working Android IDE without any proprietary stuff from Google. Works like a charm. I can confirm it, since I code Android apps with this setup!

@feld @gme

Heh. In fact this very problem happened to us at that same early PaaS company. No GitHub, no deployment.

GH Enterprise solves that particular problem but it's expensive and we're still left with hundreds or thousands of third-party dependencies to fetch.

@feld @gme This is why GitHub needs a competitor built on a distributed data store.

@profoundlynerdy @feld GitLab built on top of IPFS. That would be pretty freakin' cool.

@gme @feld Yes, it would!

Side question: isn't IPFS dependent upon DNS to work on some level?

@feld @gme with #node and #rust you can host your own repository. I don't know about the others.

@feld @gme let's see the other side: how about you self-host everything. Now there are two things that may happen: your thing gets insanely popular overnight and your servers can't handle the load anymore, or you go broke and can't continue paying for your server. It is more likely your server goes down (unless you are a big company) than something like GitHub, GitLab, Bitbucket and the likes.

@kura @feld That's not how the FreeBSD ports and packaging system works I'm afraid.

@kura @gme ... but that's not the other side? The other side is "You package your software in tarballs and release by uploading to mirror sites".

GitHub tags are not releases. The checksums of the downloaded tarballs change based on region and whether or not the GitHub mirror has voided its cache. You can't require the world to check out code from a repo. This is against all principles of release engineering, which clearly they do not teach at university.

@feld @gme ah okay, the toot I read sounded like a complaint against GitHub
I agree that tags aren't releases (you can use a tag to mark a release); GitHub itself even provides a system for creating "static" releases (e.g. github.com/DiscordInjections/D ), so you can create release tarballs that aren't changing. GitHub itself is nice, but you shouldn't use your git repo as a way to release stuff (GitHub or not).
Sorry for the misunderstanding.

@gme @feld @kura you can't go down if your selfhosting is 100% torrents

@SoniEx2 @kura @feld Which would prevent me (and many others) from installing or updating any package served over BitTorrent at work, since the protocol is blocked by my organization's firewall.

@SoniEx2 @gme @kura how? Isn't it the upload speed of the seeders and the amount participating in the swarm?

@feld @kura @gme I personally run a seedbox on my server. it means you use my server as a peer as a fallback.

if there are few ppl on the download, most of the load is on my server. if everyone is hammering the downloads, most of the load will be evenly distributed.

I recommend most ppl should run p2p mirrors for software they care about. I know I have an over 300 ratio for ubuntu 16.04.3 desktop amd64.

p2p mirrors are used automatically, unlike garbage rsync/http mirrors.

@SoniEx2 @gme @feld I'm not arguing against it, but many networks, especially company networks, block p2p traffic for security reasons: be it downloading or uploading illegal content, or suffocating the bandwidth. It's easier to QoS http/s traffic than blind p2p traffic. That's why most company networks (and some ISPs) blocked IRC and DCC back in the day, and after that Usenet (though both have lost their value nowadays).

@kura @feld @gme so qos http/s traffic so it has higher priority than uTP traffic. (even tho it already does.)

@SoniEx2 @feld @kura Doesn't remove a company's liability for copyright infringement when that college intern you brought on has decided to download the latest season of Game of Thrones using your OC192 pipes or, worse, sets up a seedbox under his desk.

@gme @kura @feld good. set up your own internal private DHT thing that can filter everything ppl request. set up a proxy node that downloads things the DHT filter allows. only allow p2p connections to that proxy node.

@SoniEx2 @feld @kura LOL. OK. Get that past your CIO, CTO, Networking, IT Operations, Legal, and Security.

@feld @kura @gme companies are evil. free software should be torrent-only and AGPLv3.

@feld Can I just say "thank you"? You and the entire ports team are literally my heroes. I mean, being able to cd into /usr/ports/...../foo and type `make all install clean` and have every last recursive dependency compiled is like a sysadmin's wet-dream. First thing I do on a fresh FreeBSD system is find konqueror and `make all install clean`, because that compiles KDE, X, and everything else under the sun for me, downloading the sources directly. It's how I break in every new system.

@feld
It is insane. MITM proxy, local self-signed certificate, intercept all the https traffic and cache it?
(yes, it's even more insane)
@gme

@feld @gme I read that toot a couple of times now and the gravity of it just hit me

Woah!

@feld @phessler People now use containers. You can hide all the dirty crap in a box and call it a package

@phessler @jaj wholeheartedly agree. This is what happens when developers get root access to machines in your environment.

@feld @gme If you haven't already, reach out to the Rust community. I think Fedora has this same problem and has solved it somehow, though I don't know the details. Either r/rust or the Rust Discourse instance would be good choices; I think there's already a packaging thread on the latter.