  • Some good advice already in this thread.

    Also worth considering QEMU as an alternative to VirtualBox. The virt-manager tool is a decent way of managing machines, and it’s relatively straightforward to create a base machine and duplicate it. VirtualBox is perhaps initially more user-friendly for absolute beginners, but once you have any familiarity with virtualization I’d suggest QEMU offers much more.

    Also, I find integration between the guest and the host Linux system is generally more straightforward. Most Linux systems already ship with Samba and the other tools QEMU uses to interact between host and guest, so there’s no need to faff around with the guest-additions stuff. Plus KVM virtual machines can run with near-native performance.
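
    As an illustration of that base-machine workflow (the names, sizes, and ISO path here are arbitrary examples):

    ```
    # create a base VM once...
    virt-install --name base --memory 4096 --vcpus 2 \
      --disk size=20 --cdrom ~/isos/distro.iso --os-variant generic

    # ...then stamp out copies whenever you need a fresh machine
    virt-clone --original base --name test1 --auto-clone
    ```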


  • If you want to play with atomic distros I’d recommend you try them in a KVM virtual machine first. They are quite restrictive, which is good for the distro developers (consistent releases and experiences for users) and for security, but not necessarily the best option for tech-savvy users.

    There are ways around the restrictions, but you can reach points where the compromises you have to make are too frustrating, and if you only find that out after setting up your desktop it can be very annoying. Also, I do use Flatpak, but it’s not the most efficient way to run software. Atomic distros have more overhead due to the need to use Flatpak or Distrobox and the like to get everything you might want (see the sketch at the end of this comment).

    Atomic distros are a neat idea, but I personally love tweaking every element of my install and optimising or customising it. So I use a rolling-release distro, keep my home folder on a separate partition, and back up regularly.
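
    For what it’s worth, the Distrobox route looks like this (the image choice is just an example):

    ```
    distrobox create --name dev --image archlinux:latest  # mutable distro in a container
    distrobox enter dev                                   # shell inside it; your home dir is shared
    ```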


  • Kubuntu 24.10 is on Plasma 6.1; not sure why you thought it was on Plasma 5? Maybe you were thinking of the Long Term Support release, which has a much longer release cycle and favours stability over cutting edge; that probably is still on 5? But personally I stay away from Ubuntu distros because of Snap.

    If you really want to learn Linux and game, maybe pick a distro that is not optimised by default for gaming and optimise it yourself?

    I’m on openSUSE Tumbleweed and have optimised it myself to game how I want. It’s rolling release so I’m on KDE Plasma 6.4. It’s not difficult to do, although I haven’t gone quite as far as the kernel patching that the gaming-focused distros offer.

    Another challenge is Arch - it’s really not as difficult as people think, and even just setting it up in a virtual machine teaches you a lot about Linux fundamentals without throwing the baby out with the bathwater. I’ve learnt a lot using KVM to create virtual machines, and I even have a Win 11 machine set up just because I can.

    Another route to consider, which I also take, is to get an SBC like a Raspberry Pi 5 and look into self-hosting services like Home Assistant etc. Again, you learn a lot about how Linux works in the process, and you can keep your main PC running for games without disruption. There is a whole self-hosting community on Lemmy with loads of different routes to go, and lots of different manufacturers these days.

    There are lots of options beyond changing distros. But also changing distros can be fun and a nice way to reset and make something new.


  • I have one of these; it’s a decent mini PC. It’s decently powerful - I used to play some Steam games on it; roughly equivalent to a Steam Deck, or a bit more powerful. I used it for streaming on my home TV, and I upgraded to an even better one as I liked it so much - and wanted to do more gaming.

    It’s basically a full PC. Whether it suits your purposes really depends on what you want to host? It could be overpowered and a bit redundant for a lot of self-hosting uses.

    I have a Raspberry Pi 5, which is cheaper than this, and am currently running Docker with Home Assistant, Syncthing, and FreshRSS on it, with plenty of spare memory and CPU resource (see the sketch at the end of this comment).

    This mini PC is considerably more powerful and will have higher power use at idle. You may struggle to use it at capacity, so it may be a bit wasteful?

    And even the Raspberry Pi 5 is overpowered and expensive for a lot of common home server users.

    So whether this PC is a good price and a good choice really depends on what you want to do with it. It’s at the end of the spectrum that can comfortably play 4K video, so it’d likely make a decent Jellyfin streaming host if that’s what you want?
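
    For scale, the services I mentioned on the Pi are just Docker containers; FreshRSS, for example, is roughly this (the port and volume choices here are mine, not gospel):

    ```
    docker run -d --name freshrss \
      -p 8080:80 \
      -v freshrss_data:/var/www/FreshRSS/data \
      --restart unless-stopped \
      freshrss/freshrss
    ```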


  • Yeah I have a 3070 and have experienced similar sorts of minor annoyances when using Wayland. When I see reports that issues are fixed I try a Wayland session and still find various oddities or issues.

    They may be marginal use cases, but I have a dual-screen setup and I might game on one screen and have a video open on the other, or even have two video streams open, one on each screen. I find videos slow down and lag, or show artefacts - issues I don’t get on X11, or didn’t get back when I was on Windows.

    I’m in the same position of looking to upgrade my graphics card, and I’m looking at AMD to avoid any more Nvidia-related issues. I love using Linux but I don’t want to be dealing with Nvidia drivers after past experience.


  • I get what you’re saying, but having put up with issues myself on a mainstream 3070 card I don’t think it’s overblown. A year really isn’t very long, and it’s been a series of issues for me. When I’ve seen reports that the issues are fixed I have tried Wayland sessions and still found basic problems, like video lag on my dual-4K setup, without any clear solution. I have an Nvidia GPU and I avoid Wayland as a result.

    My feeling is that they’ve fixed the issues for most usage cases but not all, and it can take just one unfixed issue to ruin your experience.

    I have a 3070 and am Linux-only now; I’m currently looking at an upgrade for my GPU and genuinely I’m not even considering an Nvidia card, such have the annoyances been with Nvidia and Wayland support. Many people who want specific features of Nvidia cards may not be so lucky.

    Even if Wayland support is fixed, I’m in the category of once bitten, twice shy with Nvidia on Linux.


  • I have a 3070 and generally I have no issues with gaming or working in X11.

    I have previously had major issues with Nvidia and Wayland, and I don’t use Wayland on that machine as a result. Many of those issues may have been resolved now, but at present there isn’t a need to be using Wayland, although it is being increasingly pushed. Problems I had were lagginess in the desktop, and videos becoming choppy if I had more than one major process running on the GPU (e.g. a game and a video in a browser, or two browser windows both playing video). I believe such issues have been fixed in the past 12-18 months, but I’m now in the habit of using X11 on that machine with no incentive to try Wayland again for now.

    It is very easy to avoid Wayland: ensure X11 is installed, log out of the Wayland desktop session, log into an X11 session once, and keep that as the default (see the check below).

    I do have a separate AMD machine with an integrated GPU, and that has been running Wayland from the get-go. On that machine I’ve never even had to think about this issue and have just let my Fedora-based distro (Nobara) default to Wayland. It’s very much been an Nvidia issue.
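
    Checking which session type you’re actually in is trivial:

    ```
    echo $XDG_SESSION_TYPE                                # prints "x11" or "wayland"
    ls /usr/share/xsessions /usr/share/wayland-sessions   # sessions offered at login
    ```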


  • There is quite a range of devices out there now with varying capabilities. Things like the Onion Omega2+, Orange Pi, and more.

    Raspberry Pi also remains good. While the Pi 5 is expensive and more powerful, Raspberry Pi also makes the Pi Zero boards: cheaper, less capable boards that are closer to what the original Raspberry Pi was, but on newer hardware.

    I’d say the Pi 5 is heading more towards a full-PC-like device (hence the cost and capability comparisons to mini PCs people are making in this thread). But there remain plenty of lower-spec machines out there now, similar to the original cheap Raspberry Pi concept. And we’ve had high inflation recently - to some extent the cost perception actually reflects money being worth less than it was and buying less for $10 or $20.


  • Laptops are not generally designed to run like that with a closed lid. Heat dissipation is designed around the idea that the laptop is open, and some of it happens through the keyboard surface; closing the lid changes that.

    Systems can of course be set up to power off the display (see the logind sketch below), but for server/service uses open laptops may not be efficient space-wise.

    Having said that, if the scenario is low power use, heat dissipation may not be a major issue. But if there is a non-removable battery I’d still be concerned about heat with the lid closed, and about the battery itself regardless of heat dissipation.
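
    If anyone does repurpose a laptop this way, the lid behaviour is a logind setting on systemd-based systems:

    ```
    # /etc/systemd/logind.conf
    HandleLidSwitch=ignore
    HandleLidSwitchExternalPower=ignore
    # then: sudo systemctl restart systemd-logind (or just reboot)
    ```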


  • Low power and ARM architecture are big differentiators between a Pi and laptops.

    I totally agree with recycling laptops where possible, but they’re generally noisier and less energy-efficient, plus the battery degrades over time and is a fire risk.

    They’re not necessarily a good fit for always-on server or service-type uses compared to a small board like the Raspberry Pi. But a cheap or free second-hand laptop is definitely good for tweaking, testing, and trying out projects.


  • Maybe I’m misunderstanding, but the commands would apply within zsh, which is a Bash alternative, not within the programs running in it?

    Or are you saying it’s sus because it’s illogical/confusing to have opposite uses for the same shortcut? I can see that, as people using a terminal and launching vim would constantly be working against “muscle memory” each time they switch, which would be annoying! Being familiar with keyboard shortcuts is what can make terminal-based workflows so fast.
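
    For context, zsh’s line-editor shortcuts are set with bindkey, so clashes like this are usually a config choice rather than anything fixed:

    ```
    bindkey -v                                        # vi-style editing on the command line
    bindkey '^R' history-incremental-search-backward  # or rebind a single chord
    ```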


  • BananaTrifleViolin@lemmy.world to Linux@lemmy.ml · Sudden emergency
    Do you have any drives/partitions set up to mount at boot other than the main Linux partition?

    A common issue: if Linux tries to mount a partition specified in fstab (the config file that lists the drives to mount at boot) and can’t, it will go into emergency mode. It does this because it assumes the listed drives are critically important, and to prevent any damage to your system. You can mark non-essential partitions as “nofail” in the fstab file so that Linux continues to boot even if those partitions are unavailable (example at the end of this comment).

    If you’ve added a USB drive or another hard drive to auto-mount at startup and it’s not available to Linux, then that might be the issue. Reinsert those drives and Linux should boot. Alternatively, you can log in via emergency mode and edit the fstab file yourself if you know what you’re doing. The offending drive can be removed from/commented out in the fstab file, or the nofail option added.

    If your Linux install is set to mount your Windows C: partition (for file access in Linux, for example), then it’s important to know the drive can be locked out by Windows. Windows “fast startup” is a very common cause - it basically means Windows doesn’t shut down fully; it does a fake shutdown (hibernates), and doing so locks the drive, preventing any access to the C: drive, including from Linux. If this scenario applies to you, boot into Windows, disable fast startup in the Control Panel, and then try to reboot into Linux.

    If this works, it is still worth using the “nofail” option in fstab for any non-essential drives. I personally don’t auto-mount my Windows drive at all anymore; I have it visible in my file manager but mount it manually (just clicking on it does it) when I need it.
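
    As a rough illustration (the UUID and mount point here are placeholders), a nofail entry in /etc/fstab looks like:

    ```
    # fourth field: mount options - nofail lets boot continue if this drive is missing
    UUID=xxxx-xxxx  /mnt/data  ext4  defaults,nofail  0  2
    ```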


  • Yeah true, but if you’re choosing Debian then I can see why there is caution about “unverified” flatpaks.

    Ultimately, if they’re not verified then you’re taking it on trust that they’ve been repackaged by a good actor and not a bad actor. We have no reason to believe there are malicious flatpaks on Flathub, and “verified” only really means it was packaged by the originating project itself. But it is still a separate chain of packaging and security from the official one in a distro.

    And Flathub doesn’t need to be the repo used. Fedora, for example, created its own repo so it could verify its own flatpaks in the same way as its other system repos. Other distros do not seem to be following that path.

    Personally I take the risk on flatpaks in the same way I take risks on the openSUSE OBS (or the AUR in Arch) - if I need/want software that’s not in the main repos for my distro, I will generally take it from Flathub rather than add an OBS source I don’t know well. (If it’s small software I might build from source myself.)
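
    For reference, adding Flathub and installing from it is just this (the app ID is an example):

    ```
    flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.videolan.VLC   # example app ID
    flatpak remotes --show-details             # see which remotes you trust
    ```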




  • Most people expect a GUI interface to get into their desktop, but you don’t have to use one if you don’t want to. SDDM can log into any desktops you have - KDE, but also GNOME or Xfce etc. It can also help select X11 vs Wayland sessions.

    There are alternatives like LightDM if you don’t like SDDM, or a TTY is fine too. Generally they’re not large pieces of software, and while they are undoubtedly bloated compared to what they could be, they are still small and lightweight in the era of TBs of storage and GBs of memory. The savings you’d get by not using them would be small on the scale of the rest of the OS. Obviously they’re useless for non-GUI machines/servers.

    They’re called display managers because historically the concept was added to the X11 system, where you’d have a standalone X terminal running locally for the end user with an X server, which would then connect to an X display manager on a central machine. This was in the Unix days, in shared spaces like governments, universities, or corporations, and the setup was potentially less hardware-intensive, allowing cheap X terminals and one expensive central server.

    That concept has gone now - PCs are vastly more powerful and can easily run the entire OS locally, and thin clients are the modern setup if you do want terminals/clients and central servers. The most common scenario now is the display manager running on your local PC, alongside everything else, essentially replicating the TTY login in GUI form. So yes, it’s basically a session manager, but the name is historical and probably won’t be going anywhere fast.
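
    If you do want to swap one out, display managers are just services; on a systemd distro (assuming both packages are installed) that’s roughly:

    ```
    systemctl status display-manager   # see which one is currently active
    sudo systemctl disable --now sddm
    sudo systemctl enable lightdm      # takes effect at next boot
    ```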



  • BananaTrifleViolin@lemmy.world to Linux@lemmy.ml · Secure Boot on or off with Mint?
    If your Linux OS supports Secure Boot, then it does help improve security.

    The differing opinions on it are often because it can cause issues in some setups, and in a default setup it’s only a marginal security gain.

    It adds a layer of security at boot by preventing unauthenticated third-party software/processes from running, creating a secure boot chain from your BIOS up to the OS. But the default setup also means other authenticated OSes like Windows can run, so it’s not as secure as it could be.

    To really secure it you could create your own keys, and then only your OS could boot. But as a Linux newbie that’s likely way more than you need, and there are risks if you fuck up, to the point of accidentally locking yourself out of your own machine.

    So your choice is really just the default setup, on or off. On is a bit more secure, but if you experience any issues then turn it off and don’t worry about it (you can check the current state as below).
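
    To check what state you’re actually in (mokutil ships with most distros; bootctl comes with systemd):

    ```
    mokutil --sb-state   # reports "SecureBoot enabled" or "SecureBoot disabled"
    bootctl status       # also shows Secure Boot state among other firmware info
    ```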


  • BananaTrifleViolin@lemmy.world to Linux@lemmy.ml · Secure Boot on or off with Mint?
    It’s not doing nothing. Linux uses a Microsoft-provided key for the initial BIOS authentication and then has its own tree of keys that it uses for security. So it does have the benefit of locking out malicious code/processes even in a default setup.

    Using your own Secure Boot and TPM keys is certainly more secure, but it doesn’t follow that Secure Boot in the default setup is doing nothing to help secure your system at boot.


  • BananaTrifleViolin@lemmy.world to Linux@lemmy.ml · Secure Boot on or off with Mint?
    Linux supports Secure Boot, so if a distro supports it, it’s worth using.

    Linux can use a Microsoft-signed pre-boot loader (the shim), which then performs its own key authentication for all other processes and software, forming a secure chain from the BIOS up during boot. You don’t have to play with creating your own keys.

    So if your OS supports Secure Boot, it is worth using for added security at boot. It’s far from perfect in this setup (plenty of Windows OSes also have permission to boot), but it’s better than a free-for-all without it, even if the risk is low for most desktop users.

    You can go further and generate your own keys, using Secure Boot and the TPM together to lock the system down, but you don’t have to in order to get some benefit from Secure Boot (a rough sketch of the custom-key route follows).
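
    If you do want the custom-key route, sbctl is one tool for it; a minimal sketch, assuming sbctl is installed and the firmware is in setup mode (and note this is exactly where you can lock yourself out):

    ```
    sudo sbctl status          # check Secure Boot / setup mode
    sudo sbctl create-keys     # generate your own platform keys
    sudo sbctl enroll-keys -m  # enroll them; -m keeps Microsoft's keys too
    sudo sbctl sign -s /boot/vmlinuz-linux   # sign your kernel (path varies by distro)
    ```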