With the latest release of android it now supports some Linux functionality.
Wait, it does? Gonna have to check that out.
You are missing the Muse for your Poet. :D
If getting rid of Microsoft entirely is the goal, Samba does AD with GPOs just fine.
I heard Ubuntu got some big upgrades to its GPO support starting with 22.04.
I never tested it personally, but they do have documentation for it, and Ubuntu machines can be joined to a Windows domain: https://documentation.ubuntu.com/adsys/en/latest/
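If I read the docs right, the rough flow is to join the domain via realmd and install ADSys for the GPO handling, something like this (ad.example.com and Administrator are just placeholders):
sudo apt install realmd adsys
sudo realm join --user=Administrator ad.example.com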
What? X11 has zero HDR support.
I’m curious. Say you are getting a new computer, put Debian on, want to run e.g. DeepSeek via ollama via a container (e.g. Docker or podman) and also play, how easy or difficult is it?
On the host system, you don’t need to do anything. AMDGPU and Mesa are included on most distros.
For LLMs you can go the easy route and just install the Alpaca flatpak and the AMD addon. It will work out of the box and uses ollama in the background.
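If you prefer the command line over Discover/Software, the Flatpak install should just be this (app ID taken from Flathub, if I remember it right), with the ROCm addon added afterwards as described above:
flatpak install flathub com.jeffser.Alpaca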
If you need a Docker container for it: AMD provides the handy rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete images. They contain all the required ROCm dependencies and runtimes, and you can just install your stuff on top of them.
As for GPU passthrough, all you need to do is add a device link for /dev/kfd
and /dev/dri
and you are set. For example, in a docker-compose.yml you just add this:
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri:/dev/dri
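If you prefer plain docker run over compose, the same passthrough looks roughly like this (rocm-smi should then list your GPU from inside the container):
docker run -it --device /dev/kfd --device /dev/dri rocm/dev-ubuntu-24.04:6.3-complete rocm-smi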
For example, this is the entire Dockerfile needed to build ComfyUI from scratch with ROCm. The user/group commands are only needed to get the container groups to align with my Fedora host system.
ARG UBUNTU_VERSION=24.04
ARG ROCM_VERSION=6.3
ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete
# For 6000 series
#ARG ROCM_DOCKER_ARCH=gfx1030
# For 7000 series
ARG ROCM_DOCKER_ARCH=gfx1100
FROM ${BASE_ROCM_DEV_CONTAINER}
RUN apt-get update && apt-get install -y git python-is-python3 && rm -rf /var/lib/apt/lists/*
RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.3 --break-system-packages
# Change group IDs to match Fedora
RUN groupmod -g 1337 irc && groupmod -g 105 render && groupmod -g 39 video
# Rename user on newer 24.04 release and add to video/render group
RUN usermod -l ai ubuntu && \
    usermod -d /home/ai -m ai && \
    usermod -a -G video ai && \
    usermod -a -G render ai
USER ai
WORKDIR /app
ENV PATH="/home/ai/.local/bin:${PATH}"
RUN git clone https://github.com/comfyanonymous/ComfyUI .
RUN pip install -r requirements.txt --break-system-packages
COPY start.sh /start.sh
CMD /start.sh
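The start.sh itself isn't anything special; a minimal version could look like this (8188 is ComfyUI's default port, --listen makes it reachable from outside the container):
#!/bin/bash
# Launch ComfyUI from the cloned /app directory and listen on all interfaces
cd /app
exec python main.py --listen 0.0.0.0 --port 8188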
However, for some reason Bluetooth is often quirky on PCs (Windows or Linux). My PC's Bluetooth works through a dongle, so I wonder if an integrated card would do better.
Is it a USB dongle?
If so, make sure to add a short USB-A to USB-A extension cable between your PC and the dongle. Interference is a serious issue for 2.4 GHz USB wireless dongles when they are plugged directly into the mainboard.
That used to be the case, yes.
Alpaca pretty much allows running LLMs out of the box on AMD after installing the ROCm addon in Discover/Software. LM Studio also works perfectly.
Image generation is a little bit more complicated. ComfyUI supports AMD when all ROCm dependencies are installed and the PyTorch version is swapped for the AMD version.
However, ComfyUI provides no prebuilt packages for Linux or AMD right now, so you have to set it up yourself. I currently use a simple Docker container for ComfyUI which just takes the AMD ROCm image and installs ComfyUI on top.
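If you'd rather skip Docker, the manual setup is roughly the same as what the container does: clone ComfyUI, swap in the ROCm build of PyTorch, then install the requirements (do this in a venv if you like, and adjust rocm6.3 in the index URL to whatever matches your ROCm install):
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.3
pip install -r requirements.txt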
If it’s just about self-hosting and not training, ROCm works perfectly fine for that. I self-host DeepSeek R1 32b and FLUX.1-dev on my 7900 XTX.
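With plain ollama that's just a matter of pulling the right tag (assuming the 32b tag is still named like this):
ollama run deepseek-r1:32b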
You even get more VRAM for cheaper.
It has been a while since I reinstalled Fedora KDE, but I don't think it swaps mesa/ffmpeg/gstreamer to the freeworld versions automatically; it just enables the repository for them.
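If I remember the RPM Fusion instructions right, the swap itself is something like this:
sudo dnf swap mesa-va-drivers mesa-va-drivers-freeworld
sudo dnf swap mesa-vdpau-drivers mesa-vdpau-drivers-freeworld
sudo dnf swap ffmpeg-free ffmpeg --allowerasing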
VAAPI works on the integrated GPUs as well. There’s a table of supported codecs here: https://wiki.archlinux.org/title/Hardware_video_acceleration#Comparison_tables
Unfortunately they never bothered to get things integrated into Mesa and they have 2 different packages.
Fedora’s repos lack H264 support for AMD out of the box though.
I run the 32b one on my 7900 XTX in Alpaca https://jeffser.com/alpaca/
There is no way to fit the full model in any single AMD or Nvidia GPU in existence.
I regularly program Arduinos in the Arduino IDE v2 (https://flathub.org/apps/cc.arduino.IDE2) and ESPs via the ESPHome web flasher and the esphome CLI tool.
Works flawlessly once you've added yourself to the dialout group as mentioned by @StorageB@lemmy.one.
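For reference, that's just the following, plus logging out and back in afterwards:
sudo usermod -a -G dialout $USER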
essentially our first communication is done with some central server
No, the first communication is made with your DNS server to fetch the encryption key from an HTTPS record. If a record with a key is found, it is used to encrypt the Client Hello; otherwise it falls back to the unencrypted variant.
Cloudflare is not involved, unless you are hosting your domain through Cloudflare of course.
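You can check whether a domain publishes such a key yourself; if the HTTPS record contains an ech=... parameter, ECH is available (example.com is just a placeholder, and older dig versions may need TYPE65 instead of HTTPS):
dig +short example.com HTTPS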
I am unfamiliar with QUIC, and a quick search basically tells me it is kinda like a multilane highway for UDP.
QUIC is primarily used for HTTP/3. The protocol was engineered and proposed by Google, just as ECH was by Cloudflare.
ECH is intended for privacy, not for circumventing censorship.
If the next TLS version enforces ECH, plaintext SNI will die out at some point on its own.
In what sense? ECH does not rely on Cloudflare any more than QUIC relies on Google.
I’m not reading through that entire rant but 2 things I noticed with mouse input on Wayland:
On KDE, the mouse acceleration is horrible by default. However, setting “Pointer acceleration” to “None” in the mouse configuration solves pretty much all my mouse input issues on Wayland.
Also, I noticed that there is quite a difference between default polling rates on wireless vs. wired mice. When connecting my Logitech Pro X wirelessly I get a 1000 Hz polling rate, but if I connect it wired, the polling rate falls back to 250 Hz.
Dope. It doesn't seem to have landed in LineageOS yet, but the Terminal app is already installed; it's just missing the toggle in the developer options.