The one I had would frequently drop the drives, wreaking havoc on my (software) RAID5. I later found out that it was splitting 2 ports into 4 in a way that completely broke spec.
I don’t want to speak to your specific use case, as it’s outside of my wheelhouse. My main point was that SATA cards are a problem.
As for LSI SAS cards, there are a lot of details that probably don’t (but could) matter to you: PCIe generation, connectors, lanes, etc. There are threads on various other homelab forums, TrueNAS, Unraid, etc. Some models (like the 9212-4i4e, meaning it has 4 internal and 4 external lanes) have native SATA ports that are convenient, but most will have a SAS connector or two. You’d need a matching (forward) breakout cable to connect to SATA. Note that there are several common connectors, with internal and external versions of each.
You can use the external connectors (e.g. SFF-8088) as long as you have a matching (e.g. SFF-8088 SAS-SATA) breakout cable, and are willing to route the cable accordingly. Internal connectors are simpler, but might be in lower supply.
If you just need a simple controller card to handle a few drives without major speed concerns, and it will not be the boot drive, here are the things you need to watch for:
Also, make sure you can point a fan at it. They’re designed for rackmount server chassis, so desktop-style cases don’t usually have the airflow needed.
To anyone reading, do NOT get a PCIe SATA card. Everything on the market is absolute crap that will make your life miserable.
Instead, get a used PCIe SAS card, preferably one based on an LSI chipset. These should run about $50, and you may (depending on the model) need a $20 cable to connect it to SATA devices.
I did this back in the days of Smoothwall, ~20 years ago. I used an old, dedicated PC, with 2 PCI NICs.
It was complicated and took a long time to set up properly. It was loud and used a lot of power, and didn’t give me much beyond the standard $50 routers of the day (and would be easily eclipsed by the standard $80 routers of today). But it ran reliably for a number of years without any interaction.
I also didn’t learn anything useful that I could ever apply to something else, so it ended up just being a waste of time. 2/10, spend your time on something more useful.
It won’t officially work, but it’s not too hard to get it going. I just moved a similar box to 24H2 LTSC.
OP, you’ll probably need to run “setup.exe /product server”, or follow a recent guide. You’ll also need to do this for every major upgrade (i.e. yearly)
I agree though with the plan to use this as a test ground. I also recently upgraded a Lubuntu system to similar specs, and it runs pretty smoothly. But learning Linux takes a lot of time they don’t have.
The big caveat is that the BIOS must allow it, and most released versions do not.
What is your use case? I ask because ESXi is free again, but it’s probably not a useful skill to learn these days. At least not as much as the competition.
Similarly, 2.5" mechanical drives only make sense for certain use cases. Otherwise I’d get SSDs or a 3.5" DAS.
ThinkPads are extremely well documented. For how to repair/replace parts, you need the HMM (Hardware Maintenance Manual). Just Google “ThinkPad T14 Gen 1 HMM” and you should find the official PDF on Lenovo’s site. That will tell you, step by step, how to replace the keyboard.
As for the part itself, you can again check Lenovo’s site for all compatible parts (FRUs) and find the item number and details. While I wouldn’t recommend buying directly from them due to cost, this should give you the information needed to find it elsewhere. eBay has tons of Thinkpads being sold for parts, and many of these will be parted out. You should have no issues finding what you’re looking for.
They all have to work (at least to an extent) using only x1. It’s part of the PCIe spec.
Missing pins are actually extremely common. If your board has a slot that’s physically x16 but electrically x8, which is very common for a second video card, take a closer look. Half the pins in the slot aren’t connected. The full-length slot makes you feel better about it, and it provides some mounting stability, but electrically it’s the same as an open-ended x8 slot.
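If you ever want to confirm what a slot and card actually negotiated, you can check from the OS. Here’s a rough Python sketch for Linux (assumes pciutils is installed; the filename and running it with sudo are just my suggestions, nothing official):

    # check_pcie_width.py - print the PCIe link width each device supports vs. what it negotiated.
    # Assumes a Linux box with pciutils (lspci) installed; run with sudo so LnkCap/LnkSta show up.
    import re
    import subprocess

    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

    device = None
    for line in out.splitlines():
        if line and not line[0].isspace():
            device = line.strip()   # device header, e.g. "01:00.0 Serial Attached SCSI controller: ..."
        match = re.search(r"(LnkCap|LnkSta):.*Width (x\d+)", line)
        if match and device:
            print(f"{device}\n    {match.group(1)} width: {match.group(2)}")

A card that reports LnkCap x8 but LnkSta x4 (or x1) is running in exactly the narrower-but-working mode described above.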
USB the protocol, or just a USB cable? If it’s not actually running the USB protocol, USB cables are simply a cheap, widely available way to get a cable built to a known spec.
The odds of you getting your own IPv4 address are pretty low, unless seedboxes have very different rules than I’d expect. Presumably you have a shared IP behind NAT, and they forward ports (not unlike a good VPN)
In that case, they would need the IP address, incoming port, and a very specific timestamp.
To continue with this, there is currently a major court battle to identify users who posted on a piracy subreddit, and there will be more cases like it.
OP has been very vague about what they’re trying to do, which is good - Nintendo, Adobe, Sony, etc. can’t reasonably claim to be victims to get a court order. But if there are other links to them, including other posts, there could be orders to unmask in order to show intent.
Also, be sure to run extensive burn-in tests before deploying for production use. I had an entire batch from GoHardDrive fail on me during that testing, so my data was never in danger.
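For what it’s worth, my burn-in looks roughly like the sketch below. None of it is an official procedure, the device names are placeholders, and the badblocks write test WILL erase the drives, so treat it as a starting point only:

    # burn_in.py - rough sketch of a destructive drive burn-in pass (WIPES every drive listed).
    # Assumes Linux with smartmontools and badblocks available; device names are placeholders.
    import subprocess

    DRIVES = ["/dev/sdX", "/dev/sdY"]   # placeholders - replace with the actual new drives

    for dev in DRIVES:
        # Baseline SMART health and attributes before stressing the drive.
        subprocess.run(["smartctl", "-H", "-A", dev], check=True)

        # Full destructive write/read pass over every sector (this erases the drive).
        subprocess.run(["badblocks", "-wsv", dev], check=True)

        # Kick off a long SMART self-test; check the results later with smartctl -a.
        subprocess.run(["smartctl", "-t", "long", dev], check=True)
        print(f"{dev}: badblocks pass finished, long SMART self-test started")

Run it per drive (or one process per drive in parallel) and expect it to take a long time on large disks.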
Buggy how? What specifically is an issue? Have you ever gotten to a stable and working point? If so, what changed?
I personally only use Linux in servers. It may take a while to configure initially, but then I don’t touch it in any meaningful way for years.
Thank you for the extra context. It’s a relief to know you don’t just have a bunch of USB “backup” drives connected.
To break this down to its simplest elements, you basically have a bunch of small DASes connected to a USB host controller. The rest could be achieved using another interface, such as SATA, SAS, or others. USB has certain compromises (device resets, dropped connections, limited error reporting) that you really don’t want happening to a member of a RAID, which is why you’re getting warnings from people about data loss. SATA/SAS don’t have this issue.
You should not have to replace the cable ever, especially if it does not move. Combined with the counterfeit card, it sounds like you had a bad parts supplier. But yes, parts can sometimes fail, and replacements on SAS are inconvenient. You also (probably) have to find a way to cool the card, which might be an ugly solution.
I eventually went with a proper server DAS (EMC KTN-STL3, IIRC), connected via an external SAS cable. It works like a charm, although it is extremely loud and sucks down 250 W at idle. I don’t blame anyone for refusing this as a solution.
I wrote, rewrote, and eventually deleted large sections of this response as I thought through it. It really seems like your main reason for going USB is that specific enclosure. There should really be an equivalent with SAS/SATA connectors, but I can’t find one. DAS enclosures pretty much suck, and cooling is a big part of it.
So, when it all comes down to it, you would need a DAS with good, quiet airflow, and SATA connectors. Presumably this enclosure would also need to be self-powered. It would need either 4 bays to match what you have, or 16 to cover everything you would need. This is a simple idea, and all of the pieces already exist in other products.
But I’ve never seen it all combined. It seems the data hoarder community jumps from internal bays (I’ve seen up to 15 in a reasonable consumer config) straight to rackmount server gear.
Your setup isn’t terrible, but it isn’t what it could/should be. All things being equal, you really should switch the drives over to SATA/SAS. But that depends on finding a good DAS first. If you ever find one, I’d be thrilled to switch to it as well.
You currently have 16 disks connected via USB, in a ZFS array?
I highly recommend reimagining your path forward. Define your needs (sounds like a high-capacity storage server to me), define your constraints (e.g. cost), then develop a solution to best meet them.
Even if you are trying to build one on the cheap with a high Wife Acceptance Factor, there are better ways to do so than attaching 16+ USB disks to a thin client.
I think you’re massively downplaying how much of a hit this will be.
Let’s say you make $100k/year. Think about the lifestyle it allows. You’ve just been informed that it’s now going part time, and you’ll only be making $15k/year. How far does that get you?
Now, you’re expecting someone else to pay for that advertising spot, so it won’t be that bad. But who is even eligible? Microsoft’s Bing is the obvious answer, and probably DDG. The rest of the default options aren’t even general web search engines.
Do you really think either of them is going to pay any significant amount to be the default? Especially when most people are going to change it back to Google anyway - these are, by definition, users already willing to switch to a non-default browser.
Sure, they might be willing to pay something. But it won’t be anything close to what they had before.
Kind of. They will come in multiples of 4. Let’s say you got a gigantic 8i8e card, unlikely as that is. That would (probably) have 2 internal and 2 external SAS connectors. Your standard breakout cables will split each one into 4 SATA cables (up to 16 SATA ports if you used breakout cables on all 4 SAS connectors), each running at full (SAS) speed.
But what if you were running an enterprise file server with a hundred drives, as many of these cards originally did? You can’t cram dozens of HBAs into a server; there aren’t enough PCIe slots/lanes. Well, there are SAS expanders, which basically act as splitters. They share those 4 lanes, potentially creating a bottleneck. But this is where SAS and SATA speeds differ: these are SAS lanes, which are (probably) double what SATA can do. So with an expander, you could attach 8 SATA drives to every 4 SAS lanes and still run at full speed. And if you need capacity more than speed, expanders let you split those 4 lanes across 24 drives. They are typically built into the drive backplane/DAS.
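To put rough numbers on that (assuming SAS3 lanes at 12 Gb/s and SATA III drives at 6 Gb/s, and ignoring protocol overhead):

    # Back-of-the-envelope bandwidth math for a 4-lane SAS port behind an expander.
    # Assumes SAS3 lanes (12 Gb/s each) and SATA III drives (6 Gb/s); ignores protocol overhead.
    SAS_LANES = 4
    SAS_GBPS_PER_LANE = 12.0
    SATA_GBPS = 6.0

    uplink = SAS_LANES * SAS_GBPS_PER_LANE        # 48 Gb/s back to the HBA

    for drives in (4, 8, 24):
        demand = drives * SATA_GBPS
        if demand <= uplink:
            note = "full speed"
        else:
            note = f"{demand / uplink:.1f}:1 oversubscribed"
        print(f"{drives:>2} SATA drives: {demand:>5.0f} Gb/s demand vs {uplink:.0f} Gb/s uplink -> {note}")

So 8 SATA drives per 4-lane SAS port is the break-even point, and 24 drives works out to about 3:1 oversubscription, which is usually fine when you care about capacity more than speed.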
As for the fan, just about anything will do. The chip/heatsink gets hot, but is limited to the ~75 watts provided by the PCIe bus. I just have an old 80 or 90mm fan pointing at it.