#lxd


#Incus is a next-generation #system container, application container, and virtual machine manager. It provides a user experience similar to that of a public #cloud. With it, you can easily mix and match both containers and virtual machines, sharing the same underlying #storage and #network. The Incus project was created by Aleksa Sarai as a community-driven alternative to Canonical's #LXD. linuxcontainers.org/incus/
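
As a quick sketch of the mix-and-match point, here is the real incus CLI driven from #python; the instance names c1 and v1 are placeholders:

    import subprocess

    # A container and a VM from the same image remote; both use the default
    # storage pool and network unless configured otherwise.
    subprocess.run(["incus", "launch", "images:debian/12", "c1"], check=True)
    subprocess.run(["incus", "launch", "images:debian/12", "v1", "--vm"], check=True)

    # One listing shows both instance types side by side.
    subprocess.run(["incus", "list"], check=True)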

Linux Containers - Incus - Introduction: the umbrella project behind Incus, LXC, LXCFS, Distrobuilder and more (linuxcontainers.org).

It's amazing how useful having a break can be.

Last night my brain had completely ground to a halt working on my #incus / #lxd #API client.

I could not, for the life of me, find the search terms to convey my question: "how do I verify the remote LXD server's certificate in my #python code?"

Today I have the answer and pushed updated code (github.com/medeopolis/containe).

Long story short:
- Use the supplied tools to create server and client certificates (e.g. using linuxcontainers.org/incus/docs). You could do it with openssl commands if you really have to.
- The resulting .config/{incus,lxc}/client.{crt,key} are for client authentication.
- The resulting .config/{incus,lxc}/servercerts/servername.crt can be used for HTTPS verification/validation; see the sketch below. *To actually use this meaningfully, the server cert has to be regenerated, because the only addresses covered out of the box are loopback IPs.*
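
A minimal sketch of how those two paths plug into #python requests; the hostname myserver is an assumption (8443 is the default API port), and swap "incus" for "lxc" depending on which client generated the certs:

    from pathlib import Path

    import requests

    # Assumed locations, per the list above.
    config = Path.home() / ".config" / "incus"
    client_cert = (str(config / "client.crt"), str(config / "client.key"))
    server_cert = str(config / "servercerts" / "myserver.crt")

    # cert= presents the client pair for authentication; verify= pins TLS
    # validation to the server's own certificate instead of the system CAs.
    resp = requests.get(
        "https://myserver:8443/1.0",
        cert=client_cert,
        verify=server_cert,
    )
    resp.raise_for_status()
    print(resp.json()["metadata"])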

I don't know if there is one place I can point to and thank for helping me figure my way through, but discuss.linuxcontainers.org/ had several useful posts 🙏, and perhaps a good night's sleep helped too.

So I managed to get #Forgejo jobs running in a way that I'm happy enough with. The main concern is keeping them as isolated as possible. The current runner has a scary warning on it that it's in alpha and shouldn't be considered secure.

The runner instance is Debian running in an #lxd --vm on my laptop (I didn't want to consume resources on the server, since it's small). In this instance I have #k0s running, a slimmed-down #Kubernetes, which runs #Codeberg's runner pod. That pod has three containers: one to register the runner, then two others that are always running, the runner itself and #Docker-in-Docker. You can replicate this entire setup so that each runner gets its own DinD, completely separated from the others.

Each runner checks for a single job in a loop; once a job completes, the docker server is pruned and the runner shuts down, and k0s automatically restarts it. This isn't perfect, but it's better than runner processes that never shut down, where the docker cache lingers and others can inspect it to spy on your jobs.
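
That one-job-per-runner-life pattern, as a hedged #python sketch; take_one_job is a placeholder for however the runner claims a single job, and only the docker prune is a real command:

    import subprocess
    import sys

    # Placeholder: stand-in for the runner claiming and executing one job.
    subprocess.run(["take_one_job"], check=False)

    # Real command: wipe the Docker-in-Docker cache so the next job cannot
    # inspect images or layers left behind by this one.
    subprocess.run(["docker", "system", "prune", "--all", "--force"], check=True)

    # Exit; the orchestrator (k0s here) restarts the pod with a clean slate.
    sys.exit(0)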

What could be better on a Sunday than throwing more incomplete #sourcecode over the wall and hoping no one throws it back?
This time it's a paper-thin wrapper around the #LXD / #Incus REST #API, written in #python.
(I'll be trying to use that instead of #pylxd until their websockets bug fixes are in place. Hopefully then I can swap back to pylxd.)

github.com/medeopolis/containe

Python library providing a minimal wrapper around the Incus/LXD API. - medeopolis/container_client

After taking the nickel tour of #Qubes, my hasty conclusion is that it is anti-#KISS; there are seemingly many moving parts under the surface, and many scripts to grok before you can comprehend what is going on.

I plan to give it some more time, if only to unwrap how it launches programs in a VM and shares them with dom0's X server and audio and all that; perhaps it's easier than I think.

I also think #Xen is a bit overkill. The claim is that it has a smaller kernel, and therefore a smaller attack surface, than the seemingly superior alternative, #KVM; yet some rudimentary searching for identified/known VM escapes turns up many more that impact Xen than KVM in the first place.

Sure, the #Linux kernel may be considerably larger than the Xen kernel, but it does not need to be (a lot can be trimmed from the Linux kernel if you want a more secure hypervisor), and the Linux kernel is arguably more heavily audited than the Xen kernel.

My primary concern is compartmentalization of 'the web', which is the single greatest threat to my system's security. While #firejail is a great solution, I have run into issues keeping my qutebrowser.local and firefox.local files tuned to work well, and it's not the simplest of solutions.

Qubes offers great solutions for compartmentalizing data and so on, and for that I really like it, but I think it's overkill, even for people who desire and would benefit from its security model, given what the threats against modern workstations actually are, regardless of threat actor. Most people (I HOPE) don't have numerous vulnerable services listening on random ports, waiting to be compromised by a remote threat.

So I am working to refine my own security model, with the lessons I'm learning from Qubes.

Up to this point, my way of using a system has been a bit different from most. I have two non-root users, neither of which has sudo access, so I do the criminal thing and use root directly in a virtual terminal.

One user is my admin user that has ssh keys to various other systems, and on those systems, that user has sudo access. My normal user has access to some hosts, but not all, and has no elevated privileges at all.

Both users occasionally need to use the web. When I first learned about javascript, years and years ago, it was a very benevolent tool. It could alter the web page a bit, and make popups and other "useful" things.

At some point, #javascript became a beast, a monster: something capable of scooping up your password database and your ssh keys, and of probing your local network with port scans.

In the name of convenience.

As a result, we have to take browser security more seriously if we want to avoid compromise.

The path I'm exploring at the moment is to run a VM or two as a normal user, using KVM, and then use SSH X forwarding to run firefox from the VM. The VM is easier to firewall, and this ensures that if someone escapes my browser or abuses JS in a new and unique way, no credentials are accessible unless they are also capable of breaking out of the VM.
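
In practice that is a one-liner, shown here wrapped in #python for consistency with the other sketches; browser-vm is a placeholder hostname for the KVM guest:

    import subprocess

    # -X requests X11 forwarding: firefox runs (and keeps its profile)
    # inside the VM while displaying on the host's X server.
    subprocess.run(["ssh", "-X", "browser-vm", "firefox"])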

What else might I want to consider? I 'like' the concept of dom0 having zero network access, but I don't really see what threat actor that is stopping. Sure, if someone breaks out of my VM, they can then call out to the internet, get a reverse shell, download some payloads or build tools, etc.

But if someone breaks out of a Qubes VM, they can basically do the same thing, right? Because they theoretically 'own' the hypervisor, and can restore network access to dom0 trivially, or otherwise get data onto it. Or am I mistaken?

Also, what would the #LXC / #LXD approach look like for something like this? What's its security record like, and would it provide an equivalent challenge to someone breaking out of a web browser (or other program I might use but am not thinking of at the moment)?

Please boost!
Because I think this project is really great:

If you manage multiple hosts, Proxmox environments, LXD, Kubernetes, or just plain ssh connections:

Take a look at Xpipe!

github.com/xpipe-io/xpipe

It bundles everything under one convenient interface, and it's even available cross-platform!

Its configuration can be conveniently synchronized across multiple instances, independently of the operating system, via Git.

Today I also emailed them to ask about Incus support. Apparently it's on the roadmap for the next major version. The reply came quickly, and on a Sunday at that :neocat_laptop_notice:

Open source made in Germany

Access your entire server infrastructure from your local desktop - xpipe-io/xpipe
#docker #lxd #lxc

@ktn I'd be interested to know what that whole #LXD takeover mess last year was about, and who in management had zero clue about how open source (communities) work. Now there's #Incus, and LXD has lost its developers; #Canonical / #Ubuntu gained nothing from it, and for users it's just annoying.
lwn.net/Articles/937369/
stgraber.org/2023/07/10/time-t
Sorry for the rant; otherwise, of course, have a lovely and exciting time in The Hague :-).
#container #lxc

LWN: LXD moves into Canonical (lwn.net)

Any opinions on Incus vs LXD?

Apparently the former is a fork of the latter, created because of concerns about Canonical's relicensing of LXD. But I've never heard of it and have no idea how much support it has.

I'm moving away from Ubuntu because of the snaps fiasco (I've lost LXD images because LXD is installed as a snap; it's not a theoretical annoyance), so I'd be curious to know whether Incus is a drop-in replacement with future support.

wiki.debian.org/Incus
linuxcontainers.org/incus/

#incus #lxd #lxc