Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK Aii@programming.dev is meant for news and articles only.

  • GreenKnight23@lemmy.world · ↑14 · 6 hours ago

    I hate AI because it’s a waste of finite resources.

    I hate it because it’s supported by a system of corruption and greed that is destroying the economy.

    I hate it because all major AI vendors have supported or abetted criminals in circumventing democracy worldwide.

    I hate it because it isn’t AI, it’s an LLM.

  • maplesaga@lemmy.world · ↑2 · 5 hours ago

    I’m excited because I think AI will break down and erode proprietary file formats, for ultimate portability between different software providers. That for me is a game changer in how software is used, and opens up real competition and innovation. That’s not anything as grand as the AGI these founders keep vaporwaring, but I think it will still be substantial.

  • Cuboos@lemmy.dbzer0.com · ↑14 · 10 hours ago

    I don’t hate AI; in fact I think it could be very useful. But I can’t help but notice that its critics are mostly correct and its proponents are a bunch of fucking morons.

  • Xyphius@lemmy.ca · ↑15 · 10 hours ago

    I hate “A.I.” because it’s not A.I.; it’s an if statement stapled to a dictionary.

    Also because I can’t write the short name of Albert without people thinking I’m talking about A.I.

  • SleeplessCityLights@programming.dev · ↑4 ↓1 · 11 hours ago

    As far as I am concerned I am going to do the opposite of anything Sam Altman says. He is the ultimate snake oil salesman. He sold this trash to Microsoft, which I guess these days is pretty on brand.

    • CeeBee_Eh@lemmy.world · ↑8 ↓1 · 13 hours ago

      Does it run on something that’s modelled on a neural net? Then it’s AI by definition.

      I think you’re confusing AI with “AGI”.

              • NotASharkInAManSuit@lemmy.world · ↑1 · 1 hour ago

                I’m tired of this thread. It’s not an intelligence; I already did a whole thing on my reasoning why. I’d just prefer we call it VI and stop acting like it can think.

            • CovfefeKills@lemmy.world · ↑3 ↓1 · 4 hours ago

              You are an idiot. I am not playing your game; I’ll just call you out on being an idiot. If you came across as genuine I would give you a history lesson, but you are just an asshole looking to pick a fight. If you could articulate how exactly knowing of John McCarthy and countless others and their contributions would change anything about what you are doing, I would be happy to google that for you.

              • NotASharkInAManSuit@lemmy.world · ↑1 ↓2 · 4 hours ago (edited)

                Where did anyone from the Dartmouth folks identify “AI” as “anything that runs on a “neural network””?

                Edit: Also, I asked two very simple questions. Your response already tells me everything I need to know.

                Edit II: What fucking “game” was I playing by simply asking you to verify your claims?

                • CovfefeKills@lemmy.world · ↑1 · 4 hours ago

                  Where did anyone from the Dartmouth folks identify “AI” as “anything that runs on a “neural network””?

                  Edit: Also, I asked two very simple questions. Your response already tells me everything I need to know.

                  Edit II: What fucking “game” was I playing by asking you to verify your claims?

                  Lol dude like I said, knowing who wouldn’t change what you are doing.

    • Feathercrown@lemmy.world · ↑11 ↓1 · 14 hours ago

      I don’t understand the desire to argue against the terms being used here when they fit both the common and academic usages of “AI”.

      • NotASharkInAManSuit@lemmy.world · ↑3 · 11 hours ago

        There is no autonomy. It’s just algorithmic data blending, and we don’t actually know how it works. It would be far better described as virtual intelligence than artificial intelligence.

        • Feathercrown@lemmy.world · ↑1 · 2 hours ago

          That kind of depends how you define autonomy. Whichever way, I’m not sure I get how “virtual” is a better descriptor for implying a lack of it than “artificial” is.

          Also by “we don’t actually know how it works” do you mean that we can’t explain why a particular “decision” was made by an AI, or do you mean that we don’t know how AI works in general? If it’s the first that’s generally true, if it’s the second I disagree (we know a lot, but still have a lot to learn).

          • NotASharkInAManSuit@lemmy.world · ↑1 · 1 hour ago (edited)

            Autonomy means something that can think and act according to its own will; “AI” does not have any will of its own. It can strictly do only what it was programmed to do; there is no actual intelligence to it, it’s just a .exe. Artificial intelligence implies an intelligence created by means other than natural occurrence (an environmental or biological reaction), which is to imply it’s the same thing but made in a lab. Virtual intelligence implies a representation of what we take to be signs of intelligence inside of a controlled space; it does not imply autonomy or a formed intelligence, which is exactly what these things are.

            When AI generates an answer or image through a neural network, we don’t know how it works out what it is doing. We can analyze the input and output, but the exact formula of what the gaggle of algorithms and probability calculations is doing is intentionally designed to be random, and thus far it has no reliable predictability, either because the program simply isn’t what we want it to be or because we just don’t understand it yet. There is a Kyle Hill video on generative AI that covers it better than I can, though skip the bits of rationalism he tends to drop into his videos these days. It ties into the whole concept of intelligence and programming: a computer can only do exactly what we tell it to do, how we tell it to do it. When we tell it to smash things together through the mystery box we made, with intentionally unpredictable formulas held together by mathematical analytics, algorithms, and data scraped into different categories, it does just that. We know what pieces it can use and how it might use them, but not how it will actually use them, or what it might hallucinate when information meshes together without the program having any way to know what it’s actually looking at, or when it interprets one form of data as another in a way that changes the context of the output.

            In order for a computer to have intelligence, we would have to have a full and quantifiable grasp of intelligence and cognition. While we have modeled neural networks after what we see in brain activity, that only goes as far as what we can see the brain do. We know how a lot of the brain works on a mechanical level, but we have no tangible grasp on how consciousness and intelligence work, nor what they are outside of subjective concept and experience. Before we could program intelligence and consciousness, we would first have to know exactly what is being coded and programmed, down to the most minute detail of quantification. It’s a bit foolish to believe we can program something we can’t even grasp, and even more foolish to think it would be a good idea to blindly try.

            Also, look into rationalism and the Zizians; those are the people trying to sell this shit to you. AI, as we are attempting it at the current time, is literally cult shit based on a short story by Harlan Ellison. Granted, it’s a good read.

  • ejs@piefed.social · ↑118 ↓3 · 1 day ago

    Most arguments people make against AI are in my opinion actually arguments against capitalism. Honestly, I agree with all of them, too. Ecological impact? A result of the extractive logic of capitalism. Stagnant wages, unemployment, and economic dismay for regular working people? Gains from AI being extracted by the wealthy elite. The fear shouldn’t be in the technology itself, but in the system that puts profit at all costs over people.

    Data theft? Data should be a public good where authors are guaranteed a dignified life (decoupled from the sale of their labor).

    Enshittification, AI overview being shoved down all our throats? Tactics used to maximize profits tricking us into believing AI products are useful.

    • Tartas1995@discuss.tchncs.de · ↑3 ↓1 · 8 hours ago

      I think you are right and yet so wrong.

      My problems with AI aren’t unique, and I am no special snowflake who sees through the matrix while everyone else is distracted. I am just a dude. I really doubt that the points I will bring up are anything but boring, generic arguments against AI.

      My problem with data theft is not based on the concern that artists have rights to their work. I want them rewarded for their labor, but in this case that is not my primary issue. It is the hypocrisy of companies built entirely on IP law stealing IP-protected work. I hate that the system is not ripping them into pieces like Nintendo rips an online Super Smash tournament into pieces. It is so obviously “rules for thee, not for me”. You can claim that capitalism causes this, but I really don’t think capitalism requires this shit. Sure, the rich and powerful are rich and powerful in capitalism because of capitalism, but special pleading for the elite has existed in every system we have tried.

      I hate AI because people invest in the dumbest applications for it. LLMs are trash. Voice cloners??? Wtf. Image generation? Why?? But for medical applications, where we have comparably amazing clean data, let’s invest into that a little bit. But x billions into LLMs, please.

      I hate AI because the most brain-dead applications get the most usage, and people will tell you how bad it is but use it anyway. They obviously don’t have the computing power to run a decent local model, so they just pipe any personal or confidential information into the online service that tells you the data will be used for training, where it can leak back out to other people.

      I hate AI because it is literally everything bad about society (e.g. nonconsensual nudes) and tech (e.g. data collectors) and their interaction.

    • zd9@lemmy.world · ↑37 ↓5 · 1 day ago

      AI is just a tool like anything else. What’s the saying again? “AI doesn’t kill people, capitalism kills people”?

      I do AI research for climate and other things and it’s absolutely widely used for so many amazing things that objectively improve the world. It’s the gross profit-above-all incentives that have ruined “AI” (in quotes because the general public sees AI as chatbots and funny pictures, when it’s so much more).

      • technocrit@lemmy.dbzer0.com · ↑6 ↓4 · 17 hours ago (edited)

        The quotes are because “AI” doesn’t exist. There are many programs and algorithms being used in a variety of ways. But none of them are “intelligent”.

        There is literally no intelligence in a climate model. It’s just data + statistics + compute. Please stop participating in the pseudo-scientific grift.

        • CeeBee_Eh@lemmy.world · ↑8 · 13 hours ago

          The quotes are because “AI” doesn’t exist. There are many programs and algorithms being used in a variety of ways. But none of them are “intelligent”.

          And this is where you show your ignorance. You’re using the colloquial definition of intelligence and applying it incorrectly.

          By definition, a worm has intelligence. The academic, or biological, definition of intelligence is the ability to make decisions based on a set of available information. It doesn’t mean that something is “smart”, which is how you’re using it.

          “Artificial Intelligence” is a specific definition we typically apply to an algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals. We take large amounts of data to train it and it “learns” and “remembers” those specific things. Then when we ask it to process new data it can make an “intelligent” decision on what comes next. That’s how you use the word correctly.

          Your ignorance didn’t make you right.
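The description above — an algorithm "modelled after" neurons that is trained on data and then makes decisions on new inputs — reduces, at its smallest, to a weighted sum plus a nonlinearity. A toy sketch (all values here are illustrative; real networks stack millions of these units and learn the weights from data):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed by a sigmoid.

    This is the whole 'modelled after neurons' abstraction: signals in,
    one signal out. Training means adjusting weights/bias to fit data;
    here they are simply fixed by hand.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid maps any total into (0, 1)

print(round(neuron([0.5, -1.0], [2.0, 1.0], 0.1), 3))  # 0.525
```

Whether that mechanism deserves the word "intelligent" is, of course, the argument this subthread is having.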

          • AppleTea@lemmy.zip · ↑2 ↓1 · 5 hours ago

            algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals

            Except the Neural Net model doesn’t actually reproduce everything real, living neurons do. A mathematician in the 70s said, “hey what if this is how brains work?” He didn’t actually study brains, he just put forward a model. It’s a useful model. But it’s also an extreme misrepresentation to say it approximates actual neurons.

        • zd9@lemmy.world · ↑3 ↓2 · 15 hours ago

          lol ok buddy you definitely know more than me

          FWIW I think you’re conflating AGI with AI, maybe learn up a little

          • AppleTea@lemmy.zip · ↑3 ↓3 · 15 hours ago

            The term AGI had to be coined because the things they called AI weren’t actually AI. Artificial Intelligence originates from science fiction. It has no strict definition in computer science!

            Maybe you learn up a little. Go read Isaac Asimov

            • howrar@lemmy.ca · ↑1 · 6 hours ago

              We have the term AGI because we sometimes want to communicate something more specific, and AI is too broad of a term.

            • zd9@lemmy.world · ↑4 ↓1 · 14 hours ago

              lol Again, you definitely know more than me

              I always get such a kick reading comments from extremely overly confident people who know nothing about a topic that I’m an expert in, it’s really just peak social media entertainment

      • This is fine🔥🐶☕🔥@lemmy.world · ↑13 ↓8 · 1 day ago

        Are you talking about AI, or LLMs branded as AI?

        Actual AI is accurate and efficient because it is designed for specific tasks. Unlike LLMs, which are just fancy autocomplete.

        • zd9@lemmy.world · ↑8 ↓2 · 15 hours ago

          LLMs are part of AI, so I think you’re maybe confused. You can say anything is just fancy anything, that doesn’t really hold any weight. You are familiar with autocomplete, so you try to contextualize LLMs in your narrow understanding of this tech. That’s fine, but you should actually read up because the whole field is really neat.

          • AppleTea@lemmy.zip · ↑3 ↓2 · 15 hours ago

            Literally, LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage. Same fundamental mathematics under the hood, but given a humongous scope.

            • CeeBee_Eh@lemmy.world · ↑1 ↓1 · 11 hours ago

              LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage

              That’s not true.

              • howrar@lemmy.ca · ↑3 ↓1 · 6 hours ago

                How is this untrue? Generative pre-training is literally training the model to predict what might come next in a given text.
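The point above is easy to demonstrate: strip away the scale and architecture, and generative pre-training really does optimize "predict the next token", the same objective a phone-keyboard bigram model optimizes. A deliberately crude sketch (no transformer, just counting; the text and names are illustrative):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which -- the crudest next-token model."""
    counts = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def predict(counts, word):
    """Return the most frequent next word, like keyboard autocomplete."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat" follows "the" twice, "mat" once -> cat
```

An LLM differs enormously in how it estimates the distribution (learned representations rather than raw counts), but the training target has the same shape: given context, score the next token.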

        • 8andage@sh.itjust.works · ↑12 ↓1 · 23 hours ago

          Even LLMs are useful for coding, if you keep them in their autocomplete lane instead of expecting them to think for you.

          Just don’t pay a capitalist for it; a tiny, power-efficient model that runs on your own PC is more than enough.

          • technocrit@lemmy.dbzer0.com · ↑4 ↓1 · 17 hours ago

            Yes technology can be useful but that doesn’t make it “intelligent.”

            Seriously why are people still promoting auto-complete as “AI” at this point in time? It’s laughable.

        • CeeBee_Eh@lemmy.world · ↑1 ↓1 · 11 hours ago

          Unlike LLMs, which are just fancy autocomplete.

          You might keep hearing people say this, but that doesn’t make it true (and it isn’t true).

    • NoneOfUrBusiness@fedia.io · ↑19 ↓1 · 1 day ago

      Data theft? Data should be a public good where authors are guaranteed a dignified life (decoupled from the sale of their labor).

      I’ve seen it said somewhere that, with the advent of AI, society has to embrace UBI or perish, and while that’s an exaggeration it does basically get the point across.

      • draco_aeneus@mander.xyz · ↑13 ↓2 · 1 day ago

        I don’t think that AI is as disruptive as the steam engine, or the automatic loom, or the tractor. Yes, some people will lose their jobs (plenty of people have already) but the amount of work that can be done which will benefit society is near infinite. And if it weren’t, then we could all just work 5% fewer hours to make space for 5% unemployment reduction. Unemployment only exists in our current system to threaten the employed with.

        • missingno@fedia.io · ↑10 · 1 day ago

          You might be right about the relative impact of AI alone, but there are like a dozen different problems threatening the job market all at once. Added up, I do think we are heading towards a future where we have to start rethinking how our society handles employment.

          A world where robots do most of the hard work for us ought to be a utopia, but as you say, capitalism uses unemployment as a threat. If you can’t get a job, you starve and die. That has to change in a world where we’ll have far more people than jobs.

          And I don’t think it’s as simple as just having us all work fewer hours - every technological advancement that was once said to lead to shorter working hours instead only ever led to those at the top pocketing the surplus labor.

          • draco_aeneus@mander.xyz · ↑3 · 1 day ago

            Yes, I 100% agree with you. The ‘working less’ solution was just meant as a simple thought exercise to show that with even a relatively small change, we could eliminate this huge problem. Thus the fact that the system works in this way is not an accident.

  • Kacarott@aussie.zone · ↑27 · 1 day ago

    I mean, I find the tech fascinating and probably would like it, except that I hate the way it was created, the way it is peddled, the things it is used for, the companies who use it, the way it “talks”, the impact it has had on society, the impact it has on the environment, the way it is monetised, and the companies who own it.

    And all that makes it difficult to “just appreciate the tech”

      • gravitas_deficiency@sh.itjust.works · ↑26 ↓3 · 24 hours ago

        Machine Learning and the training and use of targeted, specialized inferential models is useful. LLMs and generative content models are not.

        • frank@sopuli.xyz · ↑10 ↓5 · 23 hours ago

          What! LLMs are extremely useful. They can already:

          - Funnel wealth to the richest people
          - Create fake money to trade around
          - Deplete the world of natural resources
          - Make sure consumers cannot buy computer hardware
          - Poison the wells of online spaces with garbage content that takes 2s to generate and 2 minutes to read

        • Endmaker@ani.social (OP) · ↑4 ↓1 · 19 hours ago (edited)

          Let’s not forget about traditional AI, which has served us well for so long that we stopped thinking of it as AI.

              • gravitas_deficiency@sh.itjust.works · ↑6 · 23 hours ago

                In the strictest sense of the technical definition: all of what you are describing are algorithmic approaches that are only colloquially referred to as “AI”. Artificial Intelligence is still science fiction. “AI” as it’s being marketed and sold today is categorical snake oil. We are nowhere even close to having a Star Trek ship-wide computer with anything even approaching reliable, reproducible, and safe outputs and capabilities that are fit for purpose - much less anything even remotely akin to a Soong-type Android.

                • LwL@lemmy.world · ↑3 · 22 hours ago

                  In the strictest sense there is no technical definition because it all depends on what is “intelligence”, which isn’t something we have an easy definition for. A thermostat learning when you want which temperature based on usage stats can absolutely fulfill some definitions of intelligence (perceiving information and adapting behaviour as a result), and is orders of magnitude less complex than neural networks.
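The thermostat example above can be made concrete in a few lines. A hypothetical sketch: "learning" here is nothing more than averaging the setpoints a user has chosen at each hour, which already satisfies the perceive-and-adapt definition:

```python
from collections import defaultdict

class LearningThermostat:
    """Observes manual adjustments and adapts its target per hour of day."""

    def __init__(self, default=20.0):
        self.default = default
        self.history = defaultdict(list)  # hour -> setpoints the user chose

    def observe(self, hour, setpoint):
        self.history[hour].append(setpoint)

    def target(self, hour):
        seen = self.history[hour]
        return sum(seen) / len(seen) if seen else self.default

t = LearningThermostat()
t.observe(7, 22.0)  # the user turns the heat up in the morning...
t.observe(7, 23.0)
print(t.target(7))  # adapts to the average: 22.5
print(t.target(3))  # no data for 3am: falls back to the 20.0 default
```

No neural network anywhere, yet the device perceives information and adapts its behaviour - which is the definitional point being made.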

                • Endmaker@ani.social (OP) · ↑3 · 23 hours ago (edited)

                  algorithmic approaches that are only colloquially referred to as “AI”. Artificial Intelligence is still science fiction

                  That’s why this joke definition of AI is still the best: “AI is whatever hasn’t been done yet.”

                  I have forgotten all working definitions of AI that CS professors gave except for this one 🙃

      • neo2478@sh.itjust.works · ↑5 ↓2 · 24 hours ago

        I am still waiting for evidence of that. Tried it for a while for general questions and for coding and the results were at best meh, and most of all it was not faster than traditional search.

        Even so, if it was really useful, it would still not be worth the fact that it is based on stolen data and the impact to the environment.

        • Endmaker@ani.social (OP) · ↑8 ↓1 · 22 hours ago (edited)

          AI is a super broad field that encompasses many technologies. It is not limited to whatever the tech CEOs are pushing.

          In this comment section alone, we see a couple of examples of AI used in practical ways.

          On a more personal level, surely you’ve played video games before? If you had to face any monsters / bot opponents / etc., those are all considered AI. Depending on the game, stages / maps / environments may be procedurally generated - using AI techniques!

          There are many more examples - e.g. pathfinding in map apps, translation apps - it’s just that we are all so familiar with them that we stopped thinking of them as AI.

          So there is plenty of evidence of AI’s usefulness.
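Pathfinding in map apps, mentioned above, is a textbook example of the "AI as search" tradition. A minimal breadth-first search over a toy road graph (the graph and names are illustrative; real map engines use weighted variants such as Dijkstra or A*):

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Breadth-first search: returns a shortest path by hop count, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_route(roads, "A", "E"))  # ['A', 'B', 'D', 'E']
```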

          • AppleTea@lemmy.zip · ↑2 · 15 hours ago

            Langton’s ant can procedurally generate things, if you set it up right. Would you call that AI?

            As for enemies in gaming, it got called that because game makers wanted to give the appearance of intelligence in enemy encounters. Aspirationally cribbing a word from sci-fi. It could just as accurately have been called “puppet behavior”… more accurately, really.

            The point is “AI” is not a useful word. A bunch of different disciplines across computing all use it to describe different things, each trying to cash in on the cultural associations of a term that comes from fiction.
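For reference, the Langton's ant mentioned above is exactly two rules - on a white cell turn right, on a black cell turn left, flip the cell, step forward - yet it generates famously complex structure. A minimal sketch:

```python
def langtons_ant(steps):
    """Run Langton's ant from the origin; black cells are kept in a set."""
    black = set()
    x, y, dx, dy = 0, 0, 0, -1  # start facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx       # on black: turn left...
            black.discard((x, y))  # ...and flip the cell back to white
        else:
            dx, dy = dy, -dx       # on white: turn right...
            black.add((x, y))      # ...and flip the cell to black
        x, y = x + dx, y + dy      # step forward
    return black

# Two rules, no model of anything: after ~10,000 steps the ant famously
# settles into building a repeating diagonal "highway".
print(len(langtons_ant(10)))
```

Whether output like that counts as "procedural generation", let alone AI, is precisely the definitional argument in this thread.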

              • AppleTea@lemmy.zip · ↑1 · 15 hours ago

                I think what people are struggling to articulate is that, the way AI gets thrown around now, it’s basically being used as a replacement for the word “algorithm”.

                It’s obfuscating the truth that this is all (relatively) comprehensible mathematics. Even the black box stuff. Just because the programmer doesn’t know each step the end program takes, doesn’t mean they don’t know the principles behind how it was made, or didn’t make deliberate choices to shape the outcome.

                There’s some very neat mathematics, yes, and an utterly staggering amount of data and hardware. But at the end of the day it’s still just a (large) algorithm. Calling it AI is dubious at best, and con-artistry at worst.

          • neo2478@sh.itjust.works · ↑4 · 23 hours ago

            Fair enough. I was using the new colloquial definition of AI, which actually means LLMs specifically.

            I think the broader AI, which includes ML and all your other examples, is indeed very useful.

  • inari@piefed.zip · ↑18 ↓2 · 1 day ago

    I’m neutral-positive toward local AI, not so much toward Clawd-style agents impersonating humans on the web

    • zd9@lemmy.world · ↑5 · 1 day ago

      With openclaw and moltbook recently, the threat of taking many white collar jobs has shaken me to the core. My job may be gone in the next few years, and I do AI research directly…

  • bibbasa@piefed.social · ↑15 ↓1 · 1 day ago (edited)

    i was a vocal synth nerd before i was a fedi/foss nerd. we’ve been doing ai since before the ai bubble, and i think vocal synths are a good example of ethical ai.

    vocal synths are still a creative tool where you compose the music, lyrics and expression yourself, but the ai engine makes the voice more realistic sounding. you purchase “voice banks” which are effectively training data for a single voice and this voice bank comes from a “voice provider” who is a paid singer that will record samples for the vocal synth engine. a lot of voice providers request to have the voice bank “characterized” to sound different from themselves, and the vocal synth company will do so. compare KAF to KAFU CEVIO.

    this is a process based entirely on consent, something openai and the rest of them lack; they just send out an army of scrapers to take anything and everything they can get their hands on, consent be damned.

    actually speaking of KAF, i was excited because KAFU was coming to synth v, since i don’t have CEVIO (and don’t speak japanese). but unfortunately, KAFU SV was cancelled because the synth v ai engine made her sound too much like herself, and most likely they couldn’t modify the voice bank to sound differently enough and they cancelled it. at least, that’s the prevailing theory.

  • mrmaplebar@fedia.io · ↑17 ↓3 · 1 day ago

    I hate “AI” because it’s been built on the forced exploitation of untold millions of artists and creative laborers, without even so much as consent, let alone compensation…

      • mrmaplebar@fedia.io · ↑2 ↓1 · 14 hours ago

        I do hate unregulated capitalism.

        But that’s not the only problem, even people in the non-profit space, as well as the supposed “communists” of the CCP in China are using and abusing machine learning techniques for the purposes of surveillance, oppression and exploitation.

        It’s not the technology’s fault, obviously. But at that point this becomes a bullshit “guns don’t kill people, people kill people” argument.

    • zd9@lemmy.world · ↑11 ↓2 · 1 day ago

      You hate this narrow use of AI in the commercial space. AI is so much larger and is used in many more amazing things that actually improve humanity than just making funny pictures and chatbots to squeeze more profit out of consumers. I know this because I’ve researched AI for climate for a long time now.

    • Endmaker@ani.social (OP) · ↑9 ↓2 · 1 day ago

      forced exploitation of untold millions of artists and creative laborers, without even so much as consent, let alone compensation…

      In this case, is it AI that you truly hate?

      I think this comment said it best.