• 0 Posts
  • 10 Comments
Joined 3 months ago
Cake day: December 6th, 2024


  • Acting techniques improved massively during the 20th century, so anything that relies on them (basically anything but slapstick Comedy and mindless Action) will feel less believable in older films, which mostly impacts things from the 60s and earlier.

    Then there are the Production values: the sets in early 20th century films were basically Theatre stages, whilst more recent stuff can be incredibly realistic (pay attention to the details in things like clothing and the objects and furniture in indoor scenes in period movies). Sci-Fi also benefited massively from early 21st century techniques for physically correct 3D rendering and Mocap, so there is a disjunction in perceived realism between even the early Star Wars Movies and something like The Mandalorian.


  • Method Acting (which is a pretty powerful Acting technique for getting actors to genuinely feel the emotions of the character) dates back to the 60s in Movies (it goes back even further, to Stanislavski in turn-of-the-century Russia, but its popularity really took off mid 20th century), so before that actors were just faking it, whilst after that it was more and more a case of them reacting genuinely to imaginary circumstances (in audience terms it means we actually empathise with what’s happening to the character because the emotions on display are genuine).

    So the quality of the acting in the kind of Films that are now coming into the Public Domain will be lower than what we are used to (though in stuff like Comedy and certain kinds of Action it’s seldom noticeable).

    And this is before we even go into the quality of the Production (in audience terms, how believable the sets and scenery look).

    I doubt Hollywood will be threatened by this for at least a couple of decades.


  • Look for a processor for the same socket that supports more RAM and make sure the Motherboard can handle it - maybe you’re lucky and it’s not a limit of that architecture.

    If that won’t work, break up your self-hosting needs into multiple machines and add another second-hand or cheap machine to the pile.

    I’ve worked on designing computer systems that handle tons of data and requests, and often the only reasonable solution is to break up the load and throw more machines at it. For example, when serving millions of requests on a website, you just put a load balancer in front of it that assigns user sessions and their associated requests to multiple machines: the load balancer pretty much only routes requests by user session whilst the heavy processing is done by the backend machines, so you can expand the whole thing simply by adding more of them (there’s a rough sketch of that idea at the end of this comment).

    In a self-hosting scenario I suspect you’ll have a lot of margin for expansion by splitting services across multiple hosts and using stuff like network shared drives in the background for shared data, before you have to fully upgrade a host machine because you hit that architecture’s maximum memory.

    Granted, if a single service whose load can’t be broken down (so it can’t be run as a cluster) needs more memory than you can fit in any of your machines, then you’re stuck having to get a new one. But even then, by splitting services you can get a machine with a newer architecture that handles more memory but is still cheap (such as a mini-PC), move that memory-heavy service to it, and leave the CPU-intensive services on the old but more powerful machine.
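
    To make the load-balancing idea above a bit more concrete, here is a minimal sketch in Python of session-based routing. The backend addresses are made up and a real setup would more likely use something like nginx or HAProxy, but the core idea is just hashing the session ID so the same user always lands on the same machine:

    ```python
    import hashlib

    # Hypothetical backend hosts - replace with your own machines.
    BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    def pick_backend(session_id: str) -> str:
        """Pick a backend for a request based on its session ID.

        Hashing the session ID means the same session always goes to the
        same machine, while different sessions spread roughly evenly, so
        you scale by simply adding entries to BACKENDS.
        """
        digest = hashlib.sha256(session_id.encode("utf-8")).digest()
        index = int.from_bytes(digest[:8], "big") % len(BACKENDS)
        return BACKENDS[index]

    if __name__ == "__main__":
        for sid in ("alice-session", "bob-session", "carol-session"):
            print(sid, "->", pick_backend(sid))
    ```

    Note that with plain modulo hashing like this, adding or removing a machine reshuffles which sessions land where; real load balancers get around that with sticky sessions or consistent hashing.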