

  • No, just because it is reproducible doesn’t mean you are able to (re)produce something that works. With something like Fedora Silverblue you know that this specific composition of packages and their versions has been tested, and that all the other users run this exact composition as well.

    When you roll your own composition, where you install whatever stuff, you may be the one finding out that there’s some conflict between package A version u.v.w and package B version x.y.z.


  • I encourage you to go to town with whatever crazy setup you come up with.

    I just want to note that the reboot-to-update mechanism also has its positive sides, as ancient as it may seem (this is not Windows-level backwardness; Windows requires that many reboots yet still fails to reap the benefits). Namely, you get atomic updates, hence the name “Fedora Atomic” for example. That means there are no transient periods where your OS is running in an inconsistent state. When you update a traditional distro, the new files/libraries/binaries/kernel modules no longer match what is in RAM, including the currently running kernel. That leads to stuff like the Nvidia driver / CUDA not working until reboot, running applications failing to load a library they now need, etc. The vast majority of times this is no huge problem, but in theory the only way of maintaining a system that never runs in a basically undefined state is with atomic updates.



  • My Linux journey started when Ubuntu was in its single-digit versions. I don’t remember the exact version I used first, but it was >15 years ago.

    Of course I had a long distro-hopping phase, which finally ended with Arch. Because Arch breaks less, at least if you don’t mess with it. Upgrades of versioned distros always had hiccups or problems, and I grew tired of having to do a larger troubleshooting session once or twice a year. Arch has only very minor hiccups once in a while, and they’re typically the same: 99% of the time the update doesn’t run through because the keyring changed and you have to update it first, .9% of the time it’s a bug in, say, a new release of the DE that gets fixed upstream in a couple of days. And .1% of the time you have to look at the news because some manual intervention is required, like removing a package and going for something else. That is, as long as you keep your system free of cruft and go with a popular DE.

    Just 1.5 years ago I finally left Arch after a long time, for something that is very new and different: Fedora Atomic (Silverblue). Technology-wise it is superior in my mind, and in my last years of using Arch I had most things in Flatpaks and containers anyway. But if you want a classical distro, Arch is definitely amongst the very well-working ones.


  • The more packages you install with rpm-ostree, the likelier your system is to break. You effectively turn it back into a traditional distro that relies on a package manager, so all the things that can go wrong with a package manager are bound to go wrong. The whole point of Fedora Atomic is to offload the OS composition (so all the complicated package handling) higher up the chain. Not everyone mixes up their own combination of installed packages; instead you get a (semi-)fixed combination of packages that has been tested to work before it ever lands on your computer.

    The uBlue images are just different package combinations - but you’re still not the only one rocking the package combination of, say, Bazzite, so it is rather unlikely you’ll run into a problem that only you and nobody else has.

    This to me is also what sets Fedora Atomic apart from SUSE MicroOS, for example. With MicroOS you still have a package manager messing about with the system, and once it makes a mistake, that mistake stays buried in your system forever, unless you notice it, roll back and fix it. Whereas with Fedora Atomic, the mechanism by which your system layout gets to your computer works like git (ostree) or like images (like Docker, which is what uBlue ships). So if there’s a mistake in how your system is laid out, the next time you rebase/update you are guaranteed to end up with the intended system layout.


  • Hm well, I carried a Yoga L390 in a backpack for 3.5 years and opened+closed it many times a day. That thing is now 5 years old. It’s not being used daily anymore, but still multiple times a week. And it still works perfectly in every regard. Only the hinges became a bit less stiff and the battery capacity went down a bit. But those are a given with that age and number of charge cycles.

    For the last 1.5 years I’ve had the pleasure of working full-time with a fully specced X1 Yoga, which also has to go into the backpack every day. Of course that’s not very old, but it also has zero problems; only the silver paint at the corners has started to wear off slowly from carrying it around.

    The stylus that stows in the case is annoyingly small (and you need a separate normal-sized one for extended writing), but other than that it has all been very positive for me.


  • skilltheamps@feddit.de to Linux@lemmy.ml · Ubuntu Snap Hate · 2 months ago

    Research what happened to Upstart, Mir or Unity. It won’t take long until Snap becomes one of them. Somebody at Canonical seems to desperately obsess over having something unique, either as a way to justify Canonical’s existence or even in the hope of making the next big thing. Over all these years they never learned that whatever they do exclusively will always fall short of the joint efforts in the Linux world, because they always lack the technical advances, the ability/will to push it for a prolonged time, and/or the non-proprietary-ness. So instead of collaborating like every serious Linux vendor, they’re polluting their distro with half-assed, ever-changing and unwanted experiments. They’re even hijacking apt commands to push their stupid Snap stuff against the user’s intent. With the shenanigans they’re pulling, Ubuntu cannot be relied on, and with that they’re sabotaging their own success and driving away the commercial customers that generate revenue.



  • Specifically the shitty IPU6 situation is on Intel, and is independent of the laptop manufacturer. I also have a Thinkpad X1 with that issue. So here a situation where one manufacturer supports it properly (i.e. upstream) and others don’t cannot exist: as soon as anybody puts it upstream, it works for everybody. Thankfully there’s some progress (search for libcamera), and in the not too distant future it should work ootb. For fingerprint readers it is a different story though, as there are many different ones, so that one is on Dell indeed.



  • You have this view because your hardware is from an era when fingerprint readers largely weren’t a thing and webcams were connected via internal USB. The issue is not that the Linux kernel drops anything (between you and OP, you’re the one with the old hardware). The issue is that fingerprint readers became a commodity without ever gaining universal driver support, plus shenanigans like Intel pushing its stupid IPU6 webcam stuff without paving the way upstream beforehand.


  • skilltheamps@feddit.de to Android@lemmy.world · Looking for a Python Interpreter · 3 months ago (edited)

    Well, it is compiled to bytecode in a first step, and this bytecode then gets processed by the interpreter. Now Java does the exact same thing: it gets compiled to bytecode which then gets executed by the JVM (Java virtual machine), which is essentially an interpreter that is just a little simpler than the Python one (it has fewer types, for example). And yet nobody talks about a Java interpreter.
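
    You can watch this happen with CPython’s built-in dis module, which disassembles the bytecode a function was compiled to. A minimal sketch (the add function is just an arbitrary example):

    ```python
    # CPython compiles source to bytecode; dis shows that bytecode,
    # much like javap shows the bytecode inside a Java .class file.
    import dis

    def add(a, b):
        return a + b

    # Disassemble the compiled code object of the function.
    dis.dis(add)

    # The raw bytecode itself is just a byte string:
    print(add.__code__.co_code)
    ```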


  • Partly yes, but just installing a package without running into conflicts does not yet guarantee a working system. You have to cater for the right configurations too, for example in a corporate setting with all kinds of networking woes (like shares, VPNs and such). I think you could get this to work with Nix somehow, but you want to test these things beforehand, and if you do so using images, then you already have the thing to ship to machines in your hands; there’s no need to compose the OS and configurations over and over again for every machine.

    Another aspect of non-atomic OS composition on the target is that you have to deal with the transient phase from one state to the next. In this phase all kinds of things can happen; for example, an update of the Nvidia drivers renders CUDA dysfunctional until the next reboot, as the userspace and kernelspace parts no longer fit together. With any of the Fedora Atomic variants, transient phases with basically undefined behaviour do not exist, and the time the system is not guaranteed to be in working order is reduced to just the reboot.

    Nix is cool and definitely better than any traditional package manager. But it is not an ultimate solution; to be honest, so far it seems to live in a niche of enthusiasts who are smart enough to put up with its unique declaration language. Below that niche you have ordinary Linux users who may just be happy with Silverblue without any modifications, and above that niche you have corporations building their own images in CI/CD, CoreOS and all that jazz.


  • /dev/fb is mostly one thing: deprecated. Also, it is not really an interface to your graphics card; it is a legacy way, kindly still provided, of pushing fullscreen pixels to your monitor in an unaccelerated fashion, for things that have not made it to KMS/DRM (which at this point is pretty much merely the console emulation on the TTYs). It is not an interface to the graphics card because it doesn’t expose any of the capabilities a graphics card has (like shaders etc.). In fact, for just pushing pixels you can leave the graphics card completely out of your computer if you connect your screen by other means (think stuff like SPI, which is common in embedded devices; you can find many examples of such drivers in the kernel source at drivers/gpu/drm/tiny).
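
    To show how bare that interface is, here is a minimal sketch that just fills the legacy framebuffer with a solid colour - assuming /dev/fb0 exists, a 32-bpp little-endian pixel format with no row padding, and that you run it from a text TTY with write permission on the device:

    ```python
    # Fill the legacy fbdev framebuffer with solid blue - raw pixels only,
    # no shaders, no acceleration, no GPU capability involved at all.
    import mmap
    import os

    # fbdev exposes the virtual resolution via sysfs, e.g. "1920,1080".
    with open("/sys/class/graphics/fb0/virtual_size") as f:
        width, height = (int(v) for v in f.read().split(","))

    fd = os.open("/dev/fb0", os.O_RDWR)
    try:
        # Assumes 32 bits per pixel and a row stride of width * 4 bytes.
        fb = mmap.mmap(fd, width * height * 4)
        fb.write(b"\xff\x00\x00\x00" * (width * height))  # BGRA: blue
        fb.close()
    finally:
        os.close(fd)
    ```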


  • Well, maybe you yourself are too new to recognize some of the appeals ;)

    One large advantage of Silverblue is that the whole composition of the OS does not take place on the target machine. That means all the issues that could arise during composition never happen on the target machine and can be dealt with beforehand. In the simple case this means just enjoying vanilla Silverblue without having to think about possibly borking the machine. In an advanced use case it could mean, for example, building the OS images in a GitLab CI/CD pipeline (with the well-working tooling that already exists for Docker etc.), then having automatic tests in the pipeline ensure that everything important works as expected. Only if the tests pass does the image get added to the repository’s image registry, from where the target machines fetch it automatically and rebase to it.