cross-posted from: https://beehaw.org/post/24650125

Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.

Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond, with optimizations that significantly boost HDD performance for the AI and cloud era. The company also outlined its longer-term vision for the evolution of hard disk drives, which includes new laser technology for heat-assisted magnetic recording (HAMR), platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD expects to offer drives beyond 140TB in the 2030s.

Western Digital plans to volume-produce its first commercial hard drives featuring HAMR technology next year, with capacities starting at 40TB (CMR) or 44TB (SMR) in late 2026 and production ramping in 2027. These drives will use the company’s proven 11-platter platform with high-density media as well as HAMR heads with edge-emitting lasers that heat the iron-platinum alloy (FePt) on top of the platters to its Curie temperature — the point at which its magnetic properties change — reducing its magnetic coercivity before data is written.
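
For a sense of scale, here is a quick back-of-the-envelope sketch using only the platter counts and capacities quoted above; the per-platter arithmetic is an extrapolation, not a figure from WD:

```python
# Editorial sketch: what the roadmap above implies per platter.
# Capacities and platter counts come from the article; the division is ours.
near_term_tb, near_term_platters = 44, 11    # 44 TB SMR on the 11-platter platform
long_term_tb, long_term_platters = 140, 14   # 140 TB-class drives on up to 14 platters

near_per_platter = near_term_tb / near_term_platters   # ~4 TB per platter
long_per_platter = long_term_tb / long_term_platters   # ~10 TB per platter

print(f"Near term: ~{near_per_platter:.0f} TB/platter; "
      f"2030s target: ~{long_per_platter:.0f} TB/platter "
      f"(~{long_per_platter / near_per_platter:.1f}x per-platter capacity)")
```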

  • billwashere@lemmy.world · 15 points · 5 hours ago

    This would be a bitch to have to rebuild in a RAID array. At some point a drive can get TOO big. And this is looking to cross that line.

    • irmadlad@lemmy.world · 9 points · 5 hours ago

      At some point a drive can get TOO big

      I was thinking the same. I would hate to toast a 140 TB drive. I think I’d just sit right down and cry. I’ll stick with my 10 TB drives.

      • rtxn@lemmy.world · 10 points · 5 hours ago

        This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them in some distributed redundancy scheme with hot swaps and just shred failed units. We know they’re not worried about being wasteful.

        • thejml@sh.itjust.works · 4 points · 4 hours ago

          Rebuild time is the big problem with this in a RAID array. The interface is too slow, and you risk losing more drives in the array before the rebuild completes. (A rough estimate of that rebuild window is sketched below the thread.)

          • rtxn@lemmy.world · 5 points · 4 hours ago

            Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.

            • thejml@sh.itjust.works · 1 point · 4 hours ago

              True, but that’s really going to be pushing your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risk, but it’s still a non-trivial amount of time. Not to mention the impact on normal usage during that period.

    • non_burglar@lemmy.world · 7 points · 5 hours ago

      It doesn’t really matter; the current limitation is not so much data density at rest as getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.

      SATA will no longer be improved; we now need U.2-class designs for data transport that are built for storage. This exists, but it needs to filter down through industrial applications to reach us plebs. (A rough sense of the transfer times involved is sketched below the thread.)

    • pHr34kY@lemmy.world · 1 point · 3 hours ago

      I don’t get how a single person would have that much data. I fit my whole life, from the first shot I took on a digital camera in 2001… onto a 4TB drive.

      …and even then, two thirds of it is just pirated movies.

      • billwashere@lemmy.world · 6 points · 4 hours ago

        Amateur 😀

        But seriously, I probably have close to 100 TB of music, TV shows, movies, books, audiobooks, pictures, 3D models, magazines, etc.

      • panda_abyss@lemmy.ca · 1 point · 3 hours ago

        I need a home for my orphaned podman containers /s

        I think this is better targeted at small and medium businesses.

        If you run this as a NAS, you could easily have all your files in one place without needing complex networking.
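
A rough back-of-the-envelope sketch of the rebuild-time concern raised in the thread. The drive size comes from the article; the ~250 MB/s sustained rate is an assumption in the ballpark of today’s large HDDs, and real rebuilds under live load are typically slower:

```python
# Editorial sketch: best-case time to rewrite a full 140 TB drive.
# The 250 MB/s sustained rate is an assumption, not a measured or quoted figure;
# a rebuild that competes with production I/O will take considerably longer.
capacity_bytes = 140e12          # 140 TB (decimal terabytes)
sustained_rate = 250e6           # assumed ~250 MB/s sequential throughput

seconds = capacity_bytes / sustained_rate
print(f"Best-case full-drive rewrite: {seconds / 86400:.1f} days")
# -> roughly 6.5 days just to stream the data once, before any parity math
```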
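
And a similar sketch for the interface/throughput point: the time just to move 140TB end to end at a few illustrative rates. All rates are assumptions for illustration; for a single HDD the sustained media rate, not the SATA link, is usually the practical ceiling, which is why array- and network-level throughput ends up mattering so much:

```python
# Editorial sketch: how long it takes to move 140 TB at a few illustrative
# throughput levels. All rates are assumptions for illustration only.
capacity_bytes = 140e12

rates_bytes_per_s = {
    "HDD media rate (~250 MB/s, assumed)": 250e6,
    "SATA 6Gb/s link (~550 MB/s usable, assumed)": 550e6,
    "NVMe/U.2-class link (~3 GB/s, assumed)": 3e9,
}

for name, rate in rates_bytes_per_s.items():
    hours = capacity_bytes / rate / 3600
    print(f"{name}: {hours:.0f} hours ({hours / 24:.1f} days)")
```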