• 0 Posts
  • 330 Comments
Joined 3 years ago
Cake day: June 11th, 2023



  • dustyData@lemmy.world to Microblog Memes@lemmy.world · Make it make sense · 1 day ago

    Lol, tell me you’ve never stepped inside a data center in your life without telling me.

    Just because the US-dominated market is wasteful and destructive doesn’t mean it is like that everywhere. Buy a server today and the offerings will be the same CPUs that were available five years ago. Servers are mean, powerful beasts, and upgrades have been slow and incremental at best for almost two decades. While manufacturer warranties might last 7 to 10 years, operators offer refurbishment and refresh services with extended warranties. A decade-old server is not a rare sight in a data center; hell, we even kept some old Windows servers from the XP era around. Also, mainframes are most definitely not legacy machinery. Modern, new mainframes are deployed even today. It is a particular mode of operation with its own architectural quirks, but it is just another server at the end of the day. In fact, the z17 is an AI-specialized mainframe that released just this year as a full-stack, ready-made AI solution.

    A business that replaces servers every 3 years is burning money, and any sane CFO would kick the CTO in the nuts for making such a stupid decision without a very strong reason. Though C-suites are not known for being sane, it is mostly in the US that that kind of wastefulness is found. All this is from experience on the corporate IT side, not at all from the hobbyist or second-hand market.


  • dustyData@lemmy.world to Microblog Memes@lemmy.world · Make it make sense · 1 day ago

    Comparing AI to the Saturn V is morally insulting. Sure, servers do have a lifespan. There’s a science to the upgrade rate, and it was probably 5 years…back in the 90s, when tech was new, evolving fast, and e-waste wasn’t even a concept. Today, durability is measured in decades, which means hardware is typically provisioned to last for decades.

    There are many servers with chips from the last 20 years that could be spun up today and would still work perfectly fine. They were fitted with proper cooling and engineered to last. In 2020 I worked in a data center that still had a 1999 mainframe in production, purring away like a kitten and not a single bit less performant. It just received more network storage and new RAM from time to time. This is not what is happening with AI chips. They are planned to burn out and become useless from heat degradation.

    All of this rests on NVIDIA’s promise of chip design breakthroughs that still don’t exist, for new models of LLMs that don’t exist either. The problem is that, essentially, LLM tech has hit a plateau in performance. More hardware, more data, more tokens are not solving the problems that AI companies thought they would, and development has reached a dead end where there are very few easy or simple improvements left to make.


  • All those data centers will be of little use. The components used are deliberately e-waste, designed to die in 5 years or less. The rack space is the cheapest part, and if there’s no demand, it will be quickly depreciating real estate. Anything built will be demolished and sold off as soon as the bubble bursts. That’s their usual destiny, as data centers are not very profitable as lease space.



  • Simply because they can read the writing on the wall. Corporate made every single decision possible to signal “use AI or get fired.” With mass layoffs being driven mainly by whole industries pivoting to AI, people are fighting desperately to stay relevant. Every pundit and tech guru is singing “learn AI or become unemployable.” It is a fight for survival, not heartfelt trust or belief in the tech. Hell, they might not even understand how it works; people just know they need it on their CV to keep their meager income coming.






  • The whole point of signals is that you don’t know when you’ve failed to notice someone around you. People who only signal when someone can see them are doing it as wrong as those who never signal at all. Cars have blind spots and you can’t be aware of everything going on around you all the time. That’s why you signal, so everyone has a clear indication of your intentions. You also reminded me of an uncle who always forgot to signal before a maneuver, and when he realized mid-turn he would then flip the signal on. Like, it is of no use anymore.




  • dustyData@lemmy.world to Science Memes@mander.xyz · Christmas Animals · edited · 8 days ago

    Not arguing here. But just want to point out that disability subculture usually arises as a survival response in the face of discrimination and segregation. Everyone has a need for community and a sense of belonging. When broad hegemonic culture rejects you and your presence, belonging is found in the one distinctive feature that is the cause for the rejection and the source of cohesion with your peers. See also gay subculture as a response to homophobia, US black culture as a response to racism, feminist sorority subculture in response to misogyny, etc. So it is not rare to see disability subculture as a response to ableism. These communities are very important for security and preservation of individuals. Just as everywhere else, security is always a trade-off with something else.




  • It’s a catch-22 situation. You are supposed to disclose when you wrote the thing you’re citing, but also cite it in the third person, and it should also be obfuscated for peer review. So what happens is that you write something like “in the author’s previous work (yourownname, 2017)…”, then that gets redacted, by you or whoever is in charge of the peer review, into “in (blank) previous work (blank)…”. Now, if you’re experienced in reviews you can probably guess it is the author of the paper quoting themselves. But you still don’t know who it is, and you could never guess whether it is Ruth Gotian or not. So you’re back to the tweet’s situation.


  • Well, first: this is not just one color. There are 4 or 5 different color blocks mixed in the picture, which makes it hard to pinpoint a name for a single shade. Second: if you know anything about color theory, it is quite obvious that it’s some combination of red and green (or yellow and magenta). In color theory, both of these combinations can make anything from bright orange to yellow to grapefruit red. Or, if you greatly desaturate it or shift it towards black, brown. Everyone here is calling it some form of brown as well. And it might actually be brown (the color), given the overall range of values in the picture.

    As we all know, brown is just orange with context. Thus, the only technically accurate name it could be given is orange.