Which country’s?
It is awfully privileged and insulting to imply such horrible things and wish harm on others out of xenophobia and limited experience with diverse contexts.
Lol, tell me you’ve never stepped inside a data center in your life without telling me.
Just because the US-dominated market is wasteful and destructive doesn’t mean it is like that everywhere. You buy a server today and the offerings will be the same CPUs that were available five years ago. Servers are mean, powerful beasts, and upgrades have been slow and incremental at best for almost two decades. While manufacturer guarantees might last 7 to 10 years, operators offer refurbishment and refresh services with extended guarantees. A decade-old server is not a rare sight in a data center; hell, we even kept some old Windows servers from the XP era around.
Also, mainframes are most definitely not legacy machinery. Modern, new mainframes are deployed even today. It is a particular operating model with its own architecture quirks, but it is just another server at the end of the day. In fact, the z17 is an AI-specialized mainframe released just this year as a full-stack, ready-made AI solution.
A business that replaces servers every 3 years is burning money, and any sane CFO would kick in the nuts the CTO who made such a stupid decision without a very strong reason for it. Though C-suites are not known for being sane, it is mostly in the US that this kind of wastefulness is found. All this is from experience on the corporate IT side, not the hobbyist or second-hand market.
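For a rough sense of the money involved, here is a back-of-the-envelope sketch in Python. Every number in it (fleet size, purchase price, refurbishment cost, planning horizon) is a made-up illustrative assumption, not a figure from any real procurement, but it shows why a 3-year churn looks so bad next to a keep-and-refurbish cycle.

```python
# Back-of-the-envelope comparison of server refresh cycles.
# All figures are illustrative assumptions, not real procurement data.

FLEET_SIZE = 200          # number of servers in the fleet (assumed)
PURCHASE_PRICE = 12_000   # cost per new server, in dollars (assumed)
REFURB_PRICE = 2_000      # mid-life refurbishment per server (assumed)
HORIZON_YEARS = 12        # planning horizon (assumed)

def total_cost(refresh_every: int, refurbish: bool = False) -> int:
    """Total hardware spend over the horizon for a given refresh cycle."""
    purchases = -(-HORIZON_YEARS // refresh_every)  # ceiling division
    cost = purchases * FLEET_SIZE * PURCHASE_PRICE
    if refurbish:
        # roughly one paid refurbishment per purchase cycle
        cost += purchases * FLEET_SIZE * REFURB_PRICE
    return cost

aggressive = total_cost(refresh_every=3)                    # 3-year churn
conservative = total_cost(refresh_every=8, refurbish=True)  # keep and refurbish

print(f"3-year refresh : ${aggressive:,}")    # $9,600,000
print(f"8-year + refurb: ${conservative:,}")  # $5,600,000
```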
Comparing AI to the Saturn V is morally insulting. Sure, servers do have a lifespan. There’s a science to the upgrade rate, and it was probably 5 years… back in the 90s, when tech was new, evolving fast, and e-waste wasn’t even a concept. Today, durability is measured in decades, which means hardware is typically provisioned for decades of service.
There are many servers with chips from the last 20 years that could be spun up today and would still work perfectly fine. They were fitted with proper cooling and engineered to last. In 2020 I worked in a data center that still had a 1999 mainframe in production, purring away like a kitten and not a single bit less performant. It just received more network storage and new RAM from time to time. This is not what is happening with AI chips. They are planned to burn out and become useless from heat degradation.
All based on NVIDIA’s promise of new chip design breakthroughs that still don’t exist, for new models of LLMs that don’t exist either. The problem is that, essentially, LLM tech has hit a plateau in performance. More hardware, more data, more tokens are not solving the problems that AI companies thought they would, and development has reached a dead end where very few easy or simple improvements are left to make.
All those data centers will be of little use. The components used are deliberately e-waste, designed to die in 5 years or less. The rack space is the cheapest part, and if there’s no demand, it will be quickly depreciating real estate. Anything built will be demolished and sold as soon as the bubble bursts. That’s their usual destiny, as data centers are not very profitable as leased space.
That’s why they’re making it expendable. Those chips are designed to survive no more than 5 years of service in data centers, an unprecedentedly low level of durability provisioning. They are intentionally making e-waste.
Simply because they can read the writing on the wall. Corporate made every single decision possible to signal “use AI or get fired.” With mass layoffs being driven mainly by whole industries pivoting to AI, people are fighting desperately to stay relevant. Every pundit and tech guru is singing “learn AI or become unemployable.” It is a struggle for survival, not heartfelt trust or belief in the tech. Hell, they might not even understand how it works; people just know they need it on their CV to keep their meager income coming.


Yeah, I would say that magic spells, in English and other languages, are more traditionally associated with rhymes than with specific words. The association of Latin with magic comes from the Catholic ritual use of Latin. Even then, it was more about repeating prayer phrases, as in stereotypical exorcism or funeral rites. Gothic novels, for example, straight up used Catholic prayers in Latin to convey magical intent. But it was not vaudeville magic or modern-day superpower magic like in pop culture.


Synology offers cloud services and business-level support for their enterprise products. They do support different authentication workflows; they are just not all included with the consumer products.


Never conflate loneliness with not getting laid. Therein lies the first in a long streak of mistakes.


extreme disconnect between researchers/academic writing and how the general populace interprets the word
This is the bane of science communication. No, the way I’m using the word is not the same way you use it, and therefore your interpretation of my research is wrong. Prescriptive arguments about semantics are irrelevant and don’t fix the situation in the slightest; if anything, they muddy the waters and worsen the quality of the discussion.
The whole point of signaling is that you don’t know when you haven’t noticed someone is around. People who only use signals when someone can see them are doing it just as wrong as those who never use them. Cars have blind spots, and you can’t be aware of everything going on around you all the time. That’s why you signal, so everyone has a clear indication of your intentions. You also reminded me of an uncle who always forgot to signal before a maneuver, and when he realized mid-turn he would flip the signal on then. Like, it’s of no use at that point.
I told you, I’m not arguing. I actually agree on that point.
They are not opt-in; they are on-by-default, opt-out AI features. They said so themselves in their public communications. Also, they aren’t possible future considerations, they are concrete plans that are already underway, with funds allocated and feature goals set.
Not arguing here. But just want to point out that disability subculture usually arises as a survival response in the face of discrimination and segregation. Everyone has a need for community and a sense of belonging. When broad hegemonic culture rejects you and your presence, belonging is found in the one distinctive feature that is the cause for the rejection and the source of cohesion with your peers. See also gay subculture as a response to homophobia, US black culture as a response to racism, feminist sorority subculture in response to misogyny, etc. So it is not rare to see disability subculture as a response to ableism. These communities are very important for security and preservation of individuals. Just as everywhere else, security is always a trade-off with something else.


Probably not; it will be another “special operation”.


She probably did. But the reviewer won’t know that, as the paper gets (or should get) anonymized before review. With certain publishers, the author’s own name will be censored all the way throughout the paper.


It’s a catch-22 situation. You are supposed to disclose if you wrote the thing you’re citing, but also cite it in the third person, and it should also be obfuscated for peer review. So what happens is that you write something like “in the author’s previous work (yourownname, 2017)…”, and then that gets censored, by yourself or whoever is in charge of the peer review, into “in (blank) previous work (blank)…”. Now, if you’re experienced with reviews you can probably guess that it is the author of the paper quoting themselves. But you still don’t know who it is, and you could never be sure whether it is Ruth Gotian or not. So you’re back to the tweet’s situation.
Well, first: this is not just one color. There are 4 or 5 different color blocks mixed in the picture, which makes it hard to pinpoint a name for a single shade. Second: if you know anything about color theory, it is quite obvious that it’s some combination of red and green (or yellow and magenta). In color theory, either combination can make anything from bright orange to yellow to grapefruit red. Or, if you greatly desaturate it or shift it towards black, brown. Everyone here is calling it some form of brown as well. And it might actually be brownish, given the overall range of values in the picture.
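Since “dark orange reads as brown” is really just a hue/value relationship, here is a minimal sketch in Python using the standard colorsys module. The hue and value numbers are illustrative assumptions, not samples taken from the actual picture.

```python
# Minimal sketch: same hue, different brightness.
# The specific HSV numbers are assumptions for illustration only.
import colorsys

def to_rgb255(hue_deg: float, sat: float, val: float) -> tuple:
    """Convert HSV (hue in degrees) to a 0-255 RGB triple for easy reading."""
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, sat, val)
    return tuple(round(c * 255) for c in (r, g, b))

# An orange-ish hue around 30 degrees (red with some green mixed in):
orange = to_rgb255(30, 1.0, 1.0)   # full brightness  -> (255, 128, 0), bright orange
brown  = to_rgb255(30, 1.0, 0.45)  # same hue, darker -> (115, 57, 0), reads as brown

print("orange:", orange)
print("brown :", brown)
```

Same hue both times; only the brightness changes, and the darker one is what everyone calls brown.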
As we all know, brown is just orange with context. Thus, the only technically accurate name it could be given is orange.
Orange, fite me…
Yes, the world is only the US, the UK, and the EU. No one else counts.