

unprivileged programs have limited/no ability to do scary things to your computer. they might be able to read some data, but it’s not going to implant malware in the boot sequence for Windows.
No, but they can still severely harm your computer and data. Unprivileged programs can delete or encrypt everything in your home directory, or inject themselves into other unprivileged programs or a commonly used shortcut file. You’re probably thinking of containerized apps, which are much more limited than default user permissions: access can be granted only to what the app actually needs instead of everything your user can touch.
Linux is as susceptible to this as Windows. It’s not that hard to write proof-of-concept malware in Python that copies itself to somewhere in your home directory and appends python ~/.some-boring-config-directory-most-people-never-open/some/more/subdirectories/for/obfuscation/persist.py to your ~/.bashrc. You can do the same on Windows with PowerShell, JScript, or even VBS, all of which can do severe damage even without privilege escalation.
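To make that concrete, here’s a rough, harmless sketch of the kind of thing I mean: a script that copies itself into a hidden directory and appends a line to ~/.bashrc so it runs in every new terminal. The directory name and the print-only “payload” are made up for illustration; real malware would obviously do something nastier and hide better.

```python
# Harmless demonstration of user-level persistence via ~/.bashrc.
# The hiding spot and the print-only "payload" are made up for illustration.
import os
import shutil

HOME = os.path.expanduser("~")
HIDE_DIR = os.path.join(HOME, ".cache", "totally-boring-stuff")  # hypothetical hiding spot
TARGET = os.path.join(HIDE_DIR, "persist.py")
BASHRC = os.path.join(HOME, ".bashrc")
MARKER = f"python3 {TARGET}  # looks like boring config housekeeping"

def install():
    os.makedirs(HIDE_DIR, exist_ok=True)
    shutil.copy(os.path.abspath(__file__), TARGET)  # copy itself into the hidden dir
    with open(BASHRC, "a") as f:                    # append (>>), never overwrite
        f.write("\n" + MARKER + "\n")

def payload():
    print("I run every time you open a terminal, and I never asked for root.")

if __name__ == "__main__":
    if os.path.abspath(__file__) != TARGET:
        install()  # first run: set up persistence, no privileges needed
    payload()
```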
For example, there was that fake CAPTCHA scam a while back that social-engineered people into pasting PowerShell scripts into their Run dialog, and it could persist even without UAC elevation. A well known equivalent attack on Linux is those shell tutorial sites with the handy copy button next to the listed commands: the page controls what actually lands in your clipboard, so it might not give you the command it appears to. Even in an unprivileged terminal, something like rm -rf /* can delete all your data; ironically it can’t touch the system and application files that could easily be replaced (those require root), but it can delete the personal files you actually care about just fine. Malware can also persist by appending stuff to your bashrc with >>, because that file is owned by your user and therefore needs no extra permissions to modify.
Any modern operating system is so complex, with so many parts interacting with each other, that it’s always possible to hide something malicious somewhere in the Rube Goldberg machine where most people will never notice. Real malware often doesn’t use the typical persistence methods normal programs do, because those are well documented and easy to defend against. Linux can be said to be better than Windows in this regard because it’s open source and auditable, so it doesn’t have nearly as many undocumented hiding places (and Linux is generally less Rube Goldbergy), but it is definitely not immune. Never, ever run an untrusted program or script, not even an unprivileged one. The biggest thing Linux has over Windows here is the package manager, which is actively moderated by your distro maintainers, so you don’t have to download random installers from the internet like on Windows.
Yes and no. A secure password is extremely important against some security threats but completely useless against others. It’s like vitamin C: if you don’t get enough, that’s a massive problem that opens you up to a ton of serious issues, same as if your password isn’t complex enough. But even if you do get enough, it won’t protect you from, say, cancer, just as a strong password won’t protect you from unprivileged malware.
There’s nothing stopping any program from attempting to bruteforce your Linux password, literally running through possibilities hoping to guess it. Modern password implementations usually have some form of bruteforce protection. If you’ve ever entered your password wrong in sudo or KDE’s lock screen, it usually hangs for a few seconds before telling you the password is wrong, even though any modern computer determined that in an instant. The point is to make endless random guessing impractical: with a forced delay per attempt, guessing a sufficiently unique password would take far too long to be worth it. Your phone, and optional software available for Linux, go a step further, imposing longer and longer delays with each subsequent failed attempt (or, like pam_faillock, locking out attempts entirely for a while after too many failures). That also stops a malicious program from spawning many threads that each call sudo to bruteforce in parallel, because nothing is even checked until the time penalty elapses. You absolutely do need a reasonably strong password and an anti-bruteforce delay, but making the password overly long has diminishing returns past a certain point; it doesn’t matter whether bruteforcing it would take thousands or millions of years. The important upgrade is from the thousands of seconds a simple password like “hunter2” would survive up to years.
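Here’s a toy sketch of the escalating-delay idea (this is not how PAM, sudo, or your phone actually implement it, just the principle):

```python
# Toy illustration of escalating lockout delays on failed password attempts.
# Not how PAM/sudo actually implement it; just the principle.
import time
import hmac

CORRECT = "hunter2"     # stand-in for the real credential check
failed_attempts = 0
locked_until = 0.0      # time before which all attempts are refused outright

def try_password(guess: str) -> bool:
    global failed_attempts, locked_until
    now = time.monotonic()
    if now < locked_until:
        # Refuse to even check while the penalty is active, so parallel
        # guessers gain nothing by hammering the prompt from many threads.
        print(f"locked out for another {locked_until - now:.0f}s")
        return False
    if hmac.compare_digest(guess, CORRECT):
        failed_attempts = 0
        return True
    failed_attempts += 1
    penalty = min(2 ** failed_attempts, 3600)  # 2s, 4s, 8s, ... capped at an hour
    locked_until = time.monotonic() + penalty
    print(f"wrong; next attempt allowed in {penalty}s")
    return False

if __name__ == "__main__":
    for guess in ["password", "letmein", "hunter2"]:
        print(guess, "->", try_password(guess))
```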
Also, a password is like a padlock on a wooden box: even without the key, someone can just cut the box open. In computer terms that would be accessing the files on your SSD directly or injecting malware with root privileges; both completely bypass the check that’s “normally” supposed to stop unauthorized users. Encryption can help, but like you said, physical access is generally considered game over anyway, unless they got your computer while it was off and it’s never returned to you for you to enter your password. A computer with literally everything encrypted wouldn’t boot: your EFI partition and especially your firmware have to be unencrypted, and anything unencrypted can be tampered with by a sufficiently skilled attacker with physical access, to add things like keyloggers and backdoors that sit dormant until you graciously decrypt everything for them.
More or less, as far as I know, provided you don’t have any other form of remote access (VNC, RDP, AnyDesk/TeamViewer and similar, that weird Steam remote desktop app, a server running vulnerable software on an open port that can be hijacked, etc.). The general rule in computing is: if you don’t need it, don’t enable it, otherwise it’s ripe for abuse. That said, your router should be blocking access to local ports from the internet anyway, but another infected device on your local network is a major threat. If you do want SSH, configure it to only accept the keys of your trusted devices instead of offering a password prompt to any device that comes knocking.
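If you’re not sure how your own machine is set up, a quick-and-dirty sketch like this can flag the two directives in /etc/ssh/sshd_config that matter most here: PasswordAuthentication should be no and PubkeyAuthentication should be yes. It only reads the main config file and ignores Include’d drop-ins and Match blocks, so treat it as an illustration, not an audit.

```python
# Rough check of the two sshd_config directives discussed above.
# Only reads the main config file; ignores Include'd drop-ins, Match blocks,
# and compiled-in defaults, so it is only an illustration.
from pathlib import Path

def sshd_settings(path: str = "/etc/ssh/sshd_config") -> dict:
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip().lower()
    return settings

if __name__ == "__main__":
    s = sshd_settings()
    if s.get("passwordauthentication", "yes") != "no":
        print("PasswordAuthentication is not 'no': any device that reaches the port gets a password prompt")
    if s.get("pubkeyauthentication", "yes") != "yes":
        print("PubkeyAuthentication is disabled")
```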
“Trust” in computing is fickle and complicated, just like in real life. At the end of the day, you have to make a decision about who and what you personally trust. An iPad or a Chromebook would be among the least trustworthy computers in my mind, because they’re locked down and administered by companies I absolutely do not trust, and though the locked-down architecture does prevent other malware from getting on, there’s probably already malware by any other name on the device from the factory, carrying proper Google or Apple security signatures.
It’s the same if your distro maintainers are untrustworthy: they could slip malware into the official package repositories and you’d never know. I personally trust a reputable Linux distro over the literal biggest tech corporations in the world, but I’m still putting my faith in an organization I don’t control and whose people I don’t personally know.
Open source is more trustworthy than proprietary software because the source code is available, but even that isn’t guaranteed to stop malicious code from making it in; the recent xz backdoor comes to mind. You’re still trusting that the people looking at the source code actually catch the malicious part, and that’s not guaranteed when everyone working on it is overworked and stressed like software developers tend to be. Even when it does happen, it might be months or years down the line, after the damage has already been done. There’s a reason a full security audit of an app can cost anywhere from thousands to millions of dollars depending on how big the codebase is. Also, because the vast majority of software isn’t compiled in a reproducible way, you don’t really have a guarantee that the actual binary on your computer exactly matches the source code unless you go through the (usually difficult and frustrating) process of compiling it yourself. Sure, you can probably assume that an official release built by the source code’s authors and signed with their cryptographic keys matches the source, since both come from the same place, but that’s not guaranteed, and you’re still trusting a person or organization.
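For what it’s worth, you can at least verify that the file you downloaded is byte-for-byte the one the authors published, for example by checking it against the SHA-256 checksum they post (the filenames below are hypothetical). That only moves the trust to whoever published the checksum, though; it doesn’t prove the binary was built from the published source.

```python
# Verify a downloaded release against the checksum the authors published.
# Filenames are hypothetical; this moves trust to the publisher, it does not
# prove the binary was actually built from the published source code.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

published = "d2c5...paste-the-checksum-from-the-project-site-here"  # hypothetical
actual = sha256_of("some-app-1.2.3.tar.gz")                         # hypothetical file
print("OK" if actual == published else f"MISMATCH: got {actual}")
```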
But wait, there’s more! The compiler you use is itself a program that needed to be compiled by another compiler, and so on and so forth, until you literally reach the stage decades back where someone manually wrote the individual bits of the very first compiler in that chain. A malicious compiler can be made to hide the fact that it’s malicious (the classic “trusting trust” attack), and only a manual review and reverse engineering of the raw binary (without reverse engineering software, mind you) can prove or disprove that it’s compromised.
Finally, there’s hardware. Even if you audit every single bit of software, the processor itself has immense complexity that you can’t audit without (1) extremely expensive scientific equipment and (2) destroying it in the process, and that’s only one chip out of the tens of chips in a computer. Your processor could have secret instructions that bypass all security, and your only real hope of finding them is to bruteforce every possible input and see what happens. And proving existence is much easier than proving absence.
I’m not trying to scare you, but I do want to illustrate just how hard it is to have absolute trust in any computer. At the end of the day, you can never have a computer you completely trust unless you manually assembled it from raw materials (not aided by any existing computer) and hand-wrote every bit that goes into it. Like I said, we all have to decide to put faith in some person or organization we do not know. You could spend every waking minute auditing every last part of your computer, hardware and software, but then you wouldn’t have time to actually use it for the things you want to do. There’s no solution to this, only higher and lower degrees of trust and security, which only you can determine for yourself.
So no, no one operates that way, because it’s impossible.