Why is the Ars community so anti-AI?

teddox00

Smack-Fu Master, in training
55
I run to the comment section of every article on AI, and it's always, to me, a baffling outburst of boomer-like anti-AI sentiment. I would think most people who are into science and technology enough to frequent ARS would be excited by the advancement and the possibilities the future holds for the technology. I understand hesitation about some of the negative possibilities, but that isn't even what most comments are expressing.

There seem to be a lot of people upset that it's not truly AI. They repeat that loudly in the comments of every article. We understand that ChatGPT is not Data from Star Trek or C-3PO, but that's not how technology works; you don't just jump from nothing to Data. And strangely, the same people who complain about this are also the ones with doomsday predictions of it becoming HAL.

I think a lot of people, when debating whether or not it is AI, get so focused on the word 'intelligence' that they forget the word 'artificial' comes in front of it. This is directly from an Oxford definition of artificial:

(of a situation or concept) not existing naturally; contrived or false.

False, false intelligence, so not true intelligence but a simulation of it. That's exactly what LLMs are doing. They are trying to emulate intelligence, not be intelligence.

Yes, we know they work by predicting the next token in a series of tokens, based on what the user has indicated they want generated. That's why people seem to love to call it industrial-grade autocomplete, and they're not necessarily wrong. However, if we look at social situations between humans, especially those who are neurodivergent, it works almost exactly the same way. As someone with social anxiety and ADHD, I can tell you that my mind is going 100 mph in social situations, trying to come up with the next 'token' that is relevant and would contribute best to the current conversation. So I don't accept 'autocomplete' as a reason it's not artificial intelligence. And that's without even mentioning the multiple ongoing studies on 'emergent behaviors' of LLMs with large numbers of parameters.
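
To see just how bare-bones that kind of 'autocomplete' can be, here's a toy sketch I put together (word-level bigram counts in Python; real LLMs learn billions of neural-network weights over subword tokens, so treat this purely as an illustration):

from collections import Counter, defaultdict

# Count how often each word follows each other word in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower; an LLM does the same job with
    # learned probabilities instead of raw counts.
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' (seen twice after 'the')

Scale that counting trick up by a dozen orders of magnitude and it starts to look a lot like my brain grasping for the next word mid-conversation.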

And you can disagree with those things, or think it's all BS. But what really concerns me is that there is almost a reddit-like, hivemind, mob mentality about the whole thing, where if anyone tries to point out something different, or point out positive things about the ongoing development of AI, they get shouted down and downvoted into oblivion until their viewpoint is literally hidden. Where is the contrast of opinion? Where is the open-minded discussion? Why does it seem like a large part of the ARS community feels completely justified in sitting on their porch, shaking their fist, and yelling at AI or AI enthusiasts to get off their lawn?
 

invertedpanda

Ars Tribunus Militum
1,844
Subscriptor
One of the justifiable criticisms of AI is the ethics of training-data sourcing; if the data were ethically sourced (such as opt-in), that would probably reduce the amount of pushback against AI.

I myself enjoy making use of AI (I LOVE Adobe Firefly, and have also followed Stable Diffusion for a while), but it is being shoved into a lot of things that it doesn't necessarily make better. Amazon, for example, fails pretty hard at using it to answer questions about products, and Google's use just amplifies existing gray-hat SEO content rather than fixing the fundamental problem of search results no longer being nearly as valuable.
 

karolus

Ars Tribunus Angusticlavius
6,685
Subscriptor++
Wouldn't necessarily say anti-AI, just skeptical of the hype, and also a bit wary given the lack of concern regarding risks and negative effects. There have been FP stories covering the essential gutting of the safety team at OpenAI in the wake of the Sam Altman brouhaha, and examples of AI in the wild producing negative results. This includes using it for decision-making in hiring, as well as military use in the Gaza conflict, even though there are known problems and grave concerns. As with any new technology adoption, there will probably be some major instances of people hurt by its effects before regulation catches up.
 

Not_an_IT_guy

Ars Scholae Palatinae
1,214
Subscriptor
My completely unjustified theory (see user name) is that the people on Ars are very aware of the downsides and risks that go undiscussed in every article about AI (much like blockchain or cryptocurrency) and take offense at that. Also, most of Ars appears to be very progressive (politically), and many of the AI uses discussed appear to provide ways for the capital class to take further advantage of the worker class.
 

fractl

Ars Tribunus Militum
2,294
Subscriptor
AI, in its current state, is generally not trustworthy. Try Google's AI-assisted search and get ridiculous answers about adding glue to your pizza sauce or cooking food in gasoline, for example. Garbage in = garbage out. Without curating the training data, you are going to get the worst of the Internet, which may skew your training.

I've used Copilot to summarize meetings when I'm late. It seems to do a good job (still sucks with TLAs, though), but I don't go back and double-check that the summary is accurate.

I also have concerns about people’s data being used without permission. Seems like another way for the monied class to abuse the common folk.
 

koala

Ars Tribunus Angusticlavius
7,579
Front page comments and this forum are pretty much different worlds (even though they intersect a bit).

The weird thing to me about LLMs is that they are truly magic (arithmetic with language!), and I believe they have genuine uses; it's not like other fads. But the hype is overblown and the dangers are there, so the backlash is expected human behavior (just as people jumping onto the hype is also natural human behavior).
 
This is an interesting one, because you are right: you would think a group like the one that makes up this community would be more enthusiastic about AI and less doom-and-gloom about it.

But I think this crowd also does a good job of seeing reality in ways that are divorced from fiction. Let's take the OP's examples of Data and C-3PO. In reality, I'm betting humanity doesn't end up working alongside Data and C-3PO. In reality, I'd put my money on capitalists using Data to displace actual human workers in favor of cheaper, round-the-clock-capable ones.

In addition to that, AI represents real potential for harm in the space of misinformation and sowing discord. We've already seen that happen in real time before our eyes.

Another vector where AI poses danger is network security, which is a big one for small and medium-sized businesses.

So yeah, in short, I guess this crowd is a little leery of AI because we do see and have seen the potential downsides. And I don't think those downsides get enough coverage, and I don't think there are enough rules/regulations/guardrails in place to seriously prevent them or prepare for their inevitability.

AI has a lot of exciting, positive possibilities. But it really is a massive double-edged sword, IMO, because for every exciting positive possibility there is a negative that someone will wield against others for personal gain.
 

linnen

Ars Tribunus Militum
2,122
Subscriptor
Most likely because the LLMs that are hyped and MBA-marketed as 'Artificial Intelligence' have none of the characteristics of human (or any other animal) intelligence that ARS computer and science fans were expecting. No self-awareness or learning new things. Nothing exhibiting an ethos in any way, shape, or form. Even the typical LLM's 'Nazism as an ethos' behavior comes from training data, not self-determination.

If current AIs were more like what was hoped for/dreaded a decade or more ago, we would be discussing implementing Asimov's "Three Laws of Robotics" and slavery/13th-through-15th-Amendment issues. Or whatever version of the UN's "Rights of Humanity" the rest of the world uses.
 

sword_9mm

Ars Legatus Legionis
22,802
Subscriptor
Front page comments and this forum are pretty much different worlds (even though they intersect a bit).

Very much so.

IME front-page comments go for the most likes, so you get a cacophony of the same thing posted to get that upvote.

Look at ANY Elon Musk article. Just pop in and type something like 'Elon's an asshole and should go away' or whatever and watch the upvotes tick up. Try to actually discuss anything and you just get downvotes because 'disagreement'. AI articles, EV articles; pick 'em; just hop in and go with the group and get them upvotes (for whatever they're worth).

Yeah fucking stupid system but whatever.

As for 'AI', I'm interested to see where it goes but not super in on the hype. I do work with a couple of guys who are ate up with it and fake-coins, so whatever. The art stuff is interesting as someone with no artistic skill, and I 'like' the idea of Copilot or whatever, but I haven't seen it do much of what I'm hoping for. I guess a few years of work and maybe we'll get an 'AI assistant for everyone' that's not an idiot.

Probably one of those things that will break certain things and make other things better.

Progress marches on for good or ill.
 

koala

Ars Tribunus Angusticlavius
7,579
(So if you ask me "why are you so negative?", my answer is: I'm tired of hearing about AI constantly. If you are trying to convince me I should be positive, you'd better show me how it can be useful; I'm already aware of some useful applications of the technology. But asking me "why are you so negative" [edit: and leaving out a derogatory term floating around in your sentence] is likely only going to reinforce my viewpoint.)
 

Louis XVI

Ars Tribunus Angusticlavius
9,984
Subscriptor
There are three big reasons why I've been down on AI:
  • Its consumer-facing uses tend to be either badly broken (the new Google search) or dystopian (Microsoft's spyware). At least at this stage, it appears to make life worse, rather than better.
  • Its use in art seems upsettingly inhuman. I enjoy experiencing art (be it music, TV shows, movies, paintings, whatever) as a way of connecting with human creativity, talent, and emotion. The AI attempts at art I've seen so far come across as literally soulless pastiches of genuine artistry. It's like the uncanny valley on a much greater scale.
  • Some of the useful work LLMs can do, such as summarizing data sets, consists of shortcuts that reduce analytical rigor. For example, some of my fellow school psychologists are using LLMs to summarize psychological rating scale data when writing special education evaluation reports. For me, a lot of the thinking and analysis of what the data means about the child takes place through that summarization process. If I'm just glancing at a table of numbers and an automated verbal summary, I won't think as deeply about the data as I would if I had to summarize it myself.
 

rcduke

Ars Scholae Palatinae
1,751
Subscriptor++
With regard to AI in general, I'm against the vacuuming up of data for training purposes without providing compensation to the authors. People write books or create art, and a corporation finds their work on the internet, copies it, feeds it into their LLM, and then outputs something similar. The corporation then expects to be paid for creating this art but didn't pay the artist for a copyright license.

The article yesterday about NVIDIA defending websites filled with illegally uploaded copyrighted works fits right into this thought. NVIDIA doesn't want the book publishers to take down those sites, because NVIDIA can scrape the data and feed it into their algorithms. I absolutely abhor the textbook publishing industry, but they too should get paid if an LLM is using their books for training.

There's no effective way to stop this vacuuming of data because the corporations can take it and it's up to the artist or author to sue them. It's always opt-out, which is never beneficial for consumers.
 
For one, the ethics of the subject aren't in a finished state yet. I don't think people would be quite as against companies using data for training if there were a firm set of permissions in place, even if that amounted to carte blanche use (though, of course, that would still have its defenders and objectors).

And for another, it's just being used far beyond its current capabilities. It's absolutely got its uses, but, for example, it's so often somehow both the first line of customer relations, while at the same time companies will try to claim its answers hold no weight.

I say give it time and people will accept it, once it's actually a mature technology. I think people on a site like this just see it as it actually is: immature and not yet ready for prime time.
 

Genome

Ars Tribunus Angusticlavius
9,203
Somebody summed it up in the absolutely best way possible:

"Why are we using AI to help the tedious people do the creative stuff instead of helping the creative people avoid the tedious stuff?"

I think a lot of negative sentiment comes from that and the fact that tech is run by extreme bros now and everybody hates them, for good reasons.

Apart from the fact that generative AI (as it is today) is just a copyright-infringement machine, the AI models that the big firms are trying to sell don't really help in day-to-day use. Google basically broke search with Gemini. Microsoft decided that their model would be snapshotting your PC all day, every day. And who knows what kind of hellscape Meta is preparing in their usual bumbling around?

And as for the big player, OpenAI? I worry a lot about their leadership and their Great Leader. If ChatGPT is supposed to lead to AGI, I am incredibly concerned that Sam Altman is the one leading them there. He feels like Carter Burke from Aliens. Or Ted Faro from Horizon Zero Dawn. It's quite clear that he has drunk the Kool-Aid and that he really believes in AI, but I also think that he's blinded by it and won't see the dangers.

In fact, all of the tech leaders talking AI today worry me. I don't have much confidence in Satya Nadella or Sundar Pichai either, but their boards might be able to keep them in check. Looking at Copilot, it's more Office Hell for now, but maybe it has potential? And Gemini is just a clown show at this point, but once it gets better I guess it could actually be useful in search? Zuckerberg and Musk don't fill me with confidence either, but Zuck only wants AGI so he can become a real boy, and based on his cars, Musk's AGI will self-terminate before it can do any real harm.

(Also, Ars isn't an acronym.)
 

Felix K

Ars Tribunus Militum
1,805
Lots of people, including me, got burned on the promise of technology, only to see it taken up by grifters, charlatans, and maybe a few who lost their moral compasses.

Blockchain, NFTs, metaverses, web3, now AI. To see the internet, the thing I watched grow up and hoped would be the great democratizer and educator of all people, become weighed down by endless, iterated and regurgitated drivel is hard indeed.

I'm working on something I hope can add some humanity to AI... emotional intelligence... but otherwise... these days I'm more and more likely to thoreau out of here.
 

OrangeCream

Ars Legatus Legionis
55,362
My completely unjustified theory (see user name) is that the people on Ars are very aware of the downsides and risks that go undiscussed in every article about AI (much like blockchain or cryptocurrency) and take offense at that. Also, most of Ars appears to be very progressive (politically), and many of the AI uses discussed appear to provide ways for the capital class to take further advantage of the worker class.
My interactions with the naysayers lead me to believe the opposite:
They don't know what AI is; they only know what ChatGPT is, and so they paint the entire field as similarly useless.

Because they base their entire AI worldview on ChatGPT (and similar tools from Google and Microsoft), they can only imagine that all AI is the same, without knowing that AI is fundamentally a model that classifies (true or false) or predicts (yes or no), and that added capability arises from making the model more and more complex and taking in more inputs so it performs better.

Meaning an LLM is just a model that maps the relationships between words and phrases, without knowing their meanings. A better use for an LLM than a chatbot might be language translation, assuming there exists a sufficiently large body of translated text as training material.
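
To make the earlier point concrete, here's a minimal sketch of the 'classifies (true or false)' case: a single sigmoid neuron trained by gradient descent, in Python. This is my own toy example, not anyone's production system; every number and name in it is made up. Everything bigger, LLMs included, is this primitive made more complex and fed more inputs:

import math, random

# Toy task: output "true" (1) when x1 + x2 > 1, else "false" (0).
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]
w1, w2, b = random.random(), random.random(), 0.0

def predict(x1, x2):
    # Sigmoid squashes the weighted sum into a probability of "true".
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(5000):  # gradient descent on log loss
    for (x1, x2), y in data:
        p = predict(x1, x2)
        w1 -= 0.1 * (p - y) * x1
        w2 -= 0.1 * (p - y) * x2
        b -= 0.1 * (p - y)

print(round(predict(0.8, 0.9)))  # -> 1 ("true")
print(round(predict(0.1, 0.2)))  # -> 0 ("false")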
 

Felix K

Ars Tribunus Militum
1,805
Meaning an LLM is just a model that maps the relationships between words and phrases, without knowing their meanings. A better use for an LLM than a chatbot might be language translation, assuming there exists a sufficiently large body of translated text as training material.

I've only tested it on Croatian, French, and Italian, but Claude/ChatGPT make excellent translators. Good enough that my friend, a poet, thought the ChatGPT translation of her work was better than that of the professional translator hired by her publishing house.
 
  • Like
Reactions: Nevarre

teddox00

Smack-Fu Master, in training
55
Most likely because the LLMs that are hyped and MBA-marketed as 'Artificial Intelligence' have none of the characteristics of human (or any other animal) intelligence that ARS computer and science fans were expecting. No self-awareness or learning new things. Nothing exhibiting an ethos in any way, shape, or form. Even the typical LLM's 'Nazism as an ethos' behavior comes from training data, not self-determination.

If current AIs were more like what was hoped for/dreaded a decade or more ago, we would be discussing implementing Asimov's "Three Laws of Robotics" and slavery/13th-through-15th-Amendment issues. Or whatever version of the UN's "Rights of Humanity" the rest of the world uses.


I agree that the current offerings, as they are now, are not good simulations of intelligence and seem more like PR bots. However, before all the guardrails and the pre-prompting and forced helpful tone, etc... I used ChatGPT on like the third day it was available in 2022, and my god, it was mind-blowing. It DID feel like talking to another human. All that is gone now, though, with all the changes they've made to it.

The problem was that people were already abusing it, so in order to be attractive to the B2B space in the future, and to stop people from generally being evil, they had to put guardrails on it. I would love to have that version back, though; especially with the advancements since then, it would be interesting to see if it's closer to what people think of when they think about companion AIs.

But you are definitely right that right now it's not much better than the chatbots that were available before the release of ChatGPT.
 
  • Like
Reactions: VividVerism

Dmytry

Ars Legatus Legionis
10,279
Could it possibly be because all these LLM products we've been seeing are utter over-hyped garbage, complete with outright fraud (like when they achieve high performance on a test whose answers were part of the training dataset)?

We went from Google freaking out that they would lose the search market to people asking an AI developed by another company, to the point where Google has to artificially inflate their AI usage by inserting it before the search results, because no, most people do not actually want to use AI instead of searching the web.

It started with the concern that companies would miss out on the next big thing. Once a lot of money got sunk into not missing out on the next big thing, they absolutely do not want to hear anything about the possibility that they did not correctly identify the next big thing.
 

TSBasilisk

Ars Centurion
352
Subscriptor
There was a thread here last year or so asking why Ars was so anti-crypto. Feels like a lot of the answers there apply here too.
  • Artificial hype being driven largely by the people who stand to make a profit
  • AI apocalypse warnings from the creators being used to make their product sound more powerful
  • The "product" is not fit for use (hallucinations, etc.) but is still being pushed to consumers
  • Soaking up vast amounts of investor money but not really generating any profit now or in the immediate future
  • Consumes vast amounts of electronics and energy for mediocre products even as we need to be moderating consumption
  • Many proponents or leaders are crypto bros themselves (see Altman), coming from a space known for grift, scams, and flat-out lies
  • Many proponents are Effective Altruists or driven by similar philosophies, which focus on soaking up as much money as they can, regardless of damage done to others, because they could potentially use it for some greater good eventually, maybe
  • Promises of freeing common workers and creative types while actually putting them out of work if the actual business proposals for LLMs go through
  • Everyone is hopping on the train, jamming AI into random products like they did for blockchains, NFTs, etc. (rolled my eyes so hard when an equipment supplier declared they had integrated AI into their software)
 
I'm very much anti-LLM because IMHO the technology ISN'T there yet. People keep marveling at how great it is, and yes, it's getting better, but it's not there yet. But at the same time, because of all the hype, we see LLMs getting implemented in a lot of places, and that's actively making things worse.

Search engines no longer give me usable results because the LLM decides I didn't mean to search for what I actually searched for. Customer service no longer has a phone number, or even humans at the back end of their CS chat window, and the chatbots no longer recognize that they're stuck in a loop or that they don't have an answer when they run through their basic script, so now you're unable to get a meaningful answer from a human being. And this shit is spreading. It stinks. I don't like it.
 

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
Couple reasons:

1. It's based on systematized theft and exploits creators without compensation, for the profit of private equity ghouls.

2. Generative AI output lies. Sometimes in outrageous ways that are obvious, but also sometimes in subtler ways. I'm reminded of a bug in Xerox copiers where numbers/letters were silently substituted, which is catastrophic in the case of financial statements etc. Generative AI is designed to do this, meaning when it's used for legal filings it invents cases that don't exist etc. These risks are not widely understood by the public.

3. It enables bad actors and abusive use cases, like non-consensual nudes, fake news, etc.

4. It is extremely energy intensive and increases climate emissions.

5. The rush to implement AI features when the underlying functionality isn't anywhere near ready is a kind of social contagion tech execs are vulnerable to and ruins existing products that worked fine as they were.

To summarize, it does bad things in bad and also annoying ways, exploiting creators for profit while driving them out of business.
 

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
But what really concerns me is that there is almost a reddit-like, hivemind, mob mentality about the whole thing, where if anyone tries to point out something different, or point out positive things about the ongoing development of AI, they get shouted down and downvoted into oblivion until their viewpoint is literally hidden.
I think there are huge issues with generative AI as currently constituted - copyright theft, labor disempowerment, enshittification of things like Google search. There's also a nauseating hype train that is inherently off-putting for people who self-identify as independent thinkers.

BUT

I think you're also correct to point out that the negative reaction against it is so strong that some people may be missing out on use cases where generative AI is legitimately - and in some cases groundbreakingly - useful. My wife is a nurse practitioner. She spends something like 30 hours a week in the clinic and at least another 30-40 writing up visit notes, lab notes, and prepping charts for the next visits - it's wholly unsustainable, and the burnout rate in primary care is catastrophic in my state. A friend of ours is a doctor at Mayo who has been trialing an AI system that listens to the entirety of a visit and then writes up the visit notes. She has to review and approve them, but she estimates that it's saving her on the order of 20 hours a week. I would give just about anything to have 20 more hours a week of my wife's time for myself and our two children. And a system that could lower the burnout rate for primary care doctors would radically transform the quality of care in our entire state.

So yes, be skeptical of the AI hype. And don’t let the theft go unnoticed. But also don’t blind yourself to real opportunities and don’t pretend that you’re not watching a sea-change in capability happen before your eyes.
 
I don't love the crypto/AI comparison. Cryptocurrencies are fundamentally useless; actually doing something was not part of the design. They exist primarily to be a thing that derives value from everybody imagining that it has value (like gold or money).

These generative AI tools, I think they are overhyped, but they are designed to actually have some use. They might not be good at their intended use yet, but at least they are trying to accomplish something.

Cryptocurrency was sort of only a scam, while these generative AI tools are a potentially useful future thing that some scammers have also glommed on to.
 

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
I run to the comment section of every article on AI, and it's always, to me, a baffling outburst of boomer-like anti-AI sentiment. I would think most people who are into science and technology enough to frequent ARS would be excited by the advancement and the possibilities the future holds for the technology. I understand hesitation about some of the negative possibilities, but that isn't even what most comments are expressing.
Here's the thing. Tech was given a lot of leeway in terms of its social impact, and it has abused this largesse. This has increased awareness of the abusive patterns, which are on full display for AI companies. You'd find a lot of other things that were accepted easily facing more skepticism if they happened today, so in one sense this is partly just bad timing. But also, the skepticism is very well deserved due to the abusive patterns.

There seem to be a lot of people upset that it's not truly AI. They repeat that loudly in the comments of every article. We understand that ChatGPT is not Data from Star Trek or C-3PO, but that's not how technology works; you don't just jump from nothing to Data. And strangely, the same people who complain about this are also the ones with doomsday predictions of it becoming HAL.
This is somewhat separate from most of the abusive patterns, but it actually is a problem that it's not Data, because Data is capable of not lying and LLMs are not. LLMs are plausibility engines: they generate output that is statistically similar to what a human would produce, but factual correctness is not a statistical property, so bigger networks are not going to fix this, even though people like Altman are maintaining the pretense that they will.

I think a lot of people, when debating whether or not it is AI, get so focused on the word 'intelligence' that they forget the word 'artificial' comes in front of it.
Sounds like you've done a very bad job of understanding the criticisms, because it's not about playing games with definitions; it's about concrete bad outcomes. You can't deny a bad outcome by pointing at the dictionary (you can try, but you will fail).

Yes, we know they work by predicting the next token in a series of tokens, based on what the user has indicated they want generated. That's why people seem to love to call it industrial-grade autocomplete, and they're not necessarily wrong. However, if we look at social situations between humans, especially those who are neurodivergent, it works almost exactly the same way. As someone with social anxiety and ADHD, I can tell you that my mind is going 100 mph in social situations, trying to come up with the next 'token' that is relevant and would contribute best to the current conversation. So I don't accept 'autocomplete' as a reason it's not artificial intelligence.
Don't care, not relevant to the actual criticisms. It's not necessary for me to address this because it's not material to the criticisms I want to make. Seems more like this is what you would dearly like the criticisms to be, rather than what they actually are.

And you can disagree with those things, or think it's all BS. But what really concerns me is that there is almost a reddit-like, hivemind, mob mentality about the whole thing, where if anyone tries to point out something different, or point out positive things about the ongoing development of AI, they get shouted down and downvoted into oblivion until their viewpoint is literally hidden.
Well, this isn't reddit, and your post won't be hidden for a low score, but you are going to have to deal with a bunch of people with robust arguments that, as far as I can tell, you haven't even bothered to acquaint yourself with. I don't think it's going to go well for you.
 

Sunner

Ars Praefectus
4,330
Subscriptor++
I don't mind "AI the tech", I do generally despise pretty much all the AI bros though. And by AI bros I mean the Microsofts, Googles, Teslas and whatever else out there.

Use AI to detect possible colon cancer that a human doctor can then scrutinize? Great.
Have $shitty_ai_company steal the patient's data while showing the patient ads for rectal lubricants? Fuck yourself.
 

linnen

Ars Tribunus Militum
2,122
Subscriptor
Couple reasons:

1. It's based on systematized theft and exploits creators without compensation, for the profit of private equity ghouls.

2. Generative AI output lies. Sometimes in outrageous ways that are obvious, but also sometimes in subtler ways. I'm reminded of a bug in Xerox copiers where numbers/letters were silently substituted, which is catastrophic in the case of financial statements etc. Generative AI is designed to do this, meaning when it's used for legal filings it invents cases that don't exist etc. These risks are not widely understood by the public.

3. It enables bad actors and abusive use cases, like non-consensual nudes, fake news, etc.

4. It is extremely energy intensive and increases climate emissions.

5. The rush to implement AI features when the underlying functionality isn't anywhere near ready is a kind of social contagion tech execs are vulnerable to and ruins existing products that worked fine as they were.

To summarize, it does bad things in bad and also annoying ways, exploiting creators for profit while driving them out of business.
I'm rather dubious about your item #2, "Generative AI output lies." Lying requires at its base some valuation of truth values, which generative AIs and LLMs don't do. One cannot even state that "these programs are not lying because they believe in their output," as they don't have a belief system with current technology.

One cannot even say that "Generative AI output bullshit" because these programs are not trying to persuade* anyone.

*That their output can and will be used for persuasion is more a "Lies, Damn'd Lies, and Statistics" problem than anything else.
 
  • Like
Reactions: RAOF

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
I'm rather dubious about your item #2, "Generative AI output lies." Lying requires at its base some valuation of truth values, which generative AIs and LLMs don't do. One cannot even state that "these programs are not lying because they believe in their output," as they don't have a belief system with current technology.
Not really a distinction I see much value in litigating.
 

UserIDAlreadyInUse

Ars Praefectus
3,602
Subscriptor
I use it, it's amusing and helps me do some things I can't do myself, but I feel we're training it to do the wrong things.

We should be developing AI to do the tedious, soul-crushing work, to free humans to create.
Instead, we're training AI to create, leaving humans competing to do the tedious, soul-crushing work to survive.
 

SunRaven01

Ars Tribunus Angusticlavius
8,655
Moderator
/// OFFICIAL MODERATION NOTICE ///

Ars -- it's Latin -- is not an initialism and you do not need to capitalize it. I have edited your thread title because it was bugging the hell out of me.
Ken Fisher: "It's an attributive construction in Latin which means 'art of technology.'"
 

OrangeCream

Ars Legatus Legionis
55,362
To summarize, it does bad things in bad and also annoying ways, exploiting creators for profit while driving them out of business.
It also does good things in interesting and useful ways, unrelated to and independent of creators and their survival. Which is like a baby and its bath water. One of them is dirty and the other one is valuable.
 

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
It also does good things in interesting and useful ways, unrelated to and independent of creators and their survival. Which is like a baby and its bath water. One of them is dirty and the other one is valuable.
These unfortunately look like they are being lost in the mad scramble to do the bad ideas. I think what's happening is execs think the stock market will punish them for not doing enough AI, and they might be correct, so they push out turds like Google's AI search results that give actively dangerous answers. Better to ruin your product and actively alienate users than be seen to be slow on the uptake by the infinite clown car of the market.
 

Thank You and Best of Luck!

Ars Legatus Legionis
18,171
Subscriptor
The wildly overwrought hype that’s absolutely suffocating other avenues of innovation and actual value creation.

To instead go all-in on expensive, pointless solutions in search of a problem, whose primary beneficiaries are thinly veiled psychopaths and narcissists.

I work on the technical side of this world (ML) and on the business side (VC & enterprise). The hate is for the hype. LLMs are neat. They're about 1% as useful as what's being rammed down our throats constantly, though. The hype results in a MASSIVE misallocation of time, energy, and capital.
 

Hangfire

Ars Tribunus Angusticlavius
7,353
Subscriptor++
I run to the comment section of every article on AI, and it's always, to me, a baffling outburst of boomer-like anti-AI sentiment. I would think most people who are into science and technology enough to frequent ARS would be excited by the advancement and the possibilities the future holds for the technology. I understand hesitation about some of the negative possibilities, but that isn't even what most comments are expressing.

There seem to be a lot of people upset that it's not truly AI. They repeat that loudly in the comments of every article. We understand that ChatGPT is not Data from Star Trek or C-3PO, but that's not how technology works; you don't just jump from nothing to Data. And strangely, the same people who complain about this are also the ones with doomsday predictions of it becoming HAL.

I think a lot of people, when debating whether or not it is AI, get so focused on the word 'intelligence' that they forget the word 'artificial' comes in front of it. This is directly from an Oxford definition of artificial:

(of a situation or concept) not existing naturally; contrived or false.

False, false intelligence, so not true intelligence but a simulation of it. That's exactly what LLMs are doing. They are trying to emulate intelligence, not be intelligence.

Yes, we know they work by predicting the next token in a series of tokens, based on what the user has indicated they want generated. That's why people seem to love to call it industrial-grade autocomplete, and they're not necessarily wrong. However, if we look at social situations between humans, especially those who are neurodivergent, it works almost exactly the same way. As someone with social anxiety and ADHD, I can tell you that my mind is going 100 mph in social situations, trying to come up with the next 'token' that is relevant and would contribute best to the current conversation. So I don't accept 'autocomplete' as a reason it's not artificial intelligence. And that's without even mentioning the multiple ongoing studies on 'emergent behaviors' of LLMs with large numbers of parameters.

And you can disagree with those things, or think it's all BS. But what really concerns me is that there is almost a reddit-like, hivemind, mob mentality about the whole thing, where if anyone tries to point out something different, or point out positive things about the ongoing development of AI, they get shouted down and downvoted into oblivion until their viewpoint is literally hidden. Where is the contrast of opinion? Where is the open-minded discussion? Why does it seem like a large part of the ARS community feels completely justified in sitting on their porch, shaking their fist, and yelling at AI or AI enthusiasts to get off their lawn?
Because it's not AI; it's just a rebadged chatbot with a larger dataset of everyone's responses, and it spews back out what we've fed into it. Ten minutes on Reddit and 4chan and it'll break like the old MS chatbot did... It's basically Google for old people who don't know how to google. The only people impressed by this shit currently are the usual crowd of VCs chasing profits, etc. It's a fad, just like how "Citizen Journalism" was a fad with everyone blogging... and well, that died... because the simple fact is most people are boring as fuck and actually have nothing of interest to share with the rest of the world.

It's the same cycle of hype and outsized coverage and dumbass FOMO investors, like how everyone was screaming that blockchain would be the next revolutionary thing in logistics and how everyone would use it to track a fucking banana... No one uses blockchain for anything except dumbass cryptobros proclaiming it's going to be the next big thing for the last 10 years... Also internet streaming: everyone jumped on that, and it's now down to Netflix, Amazon, and Disney, with a couple of the Asian players still struggling or relying on domestic legal protection to stay alive.

Oh yeah, and remember when GoDaddy wanted to get everyone to buy their own domains? Yeah, no thanks...

ISPs trying to get into content... Remember Comcast buying NBC? Time Warner and AOL? The hilariously failed attempts at merging the pipes with the content...

Every single attempt at being the next big Facebook replacement thing... Twitter bought and killed Vine... Foursquare...

YAWN Call me when this shit actually works and isn't basically a FOMO fad.
 

Tom Foolery

Ars Legatus Legionis
13,783
Subscriptor
I use it, it's amusing and helps me do some things I can't do myself, but I feel we're training it to do the wrong things.

We should be developing AI to do the tedious, soul-crushing work, to free humans to create.
Instead, we're training AI to create, leaving humans competing to do the tedious, soul-crushing work to survive.
But we really don't need AI to do the tedious, soul-crushing work. Simple automation is good enough for most tedious tasks: I myself have an automated pool cleaner and a pair of Roombas that, between them, do a pretty good job taking care of the swimming pool and the floors in my house. Drip systems water the palms and the spices we grow, and some automation keeps my shops from becoming ovens every summer. I spend about two hours every fortnight on those things, whereas it would take triple that amount of time weekly if I handled them manually. I mean, we can always do better with these things, but AI is not really needed. Just better automation.

I could see some industries opposing AI integration into all kinds of applications. Let's say, for example, that we were all able to afford our own AI robot assistant. How would such a robot be used? When we have a faucet that constantly drips, will we hire a plumber to replace the faucet, or get the AI robot assistant to handle it? How would the plumbers feel about this? This could be applied to the building industry in its entirety; the workforce tied to that industry is unreliable at best, and the workmanship is totally hit-or-miss. What about convenience foods? What happens to fast food when an AI robot assistant can make better burgers than you can get at any fast food joint? Yeah, McD's will have long since replaced all of their workers with automation before then, but who would want one of those when you can get a burger made exactly the way you like it, every time, by your personal AI robot assistant?

I love using AI to play around; I experiment with it and am working toward skilling up technically in AI and ML. As long as we keep the toolsets accessible to the general public, we should be fine.
 

Occam’s Blunt Razor

Smack-Fu Master, in training
6
The ethics around how the models are trained are the primary reason, as many have stated.

Also, I think there are posters still feeling burned by the promise (read: hype) around big data. I’m sure that at least one poor soul here received an edict from their CTO to move to the cloud… only for a new CTO to see through the hype and issue a new edict to go back to on-prem.

So while we have highly technical people around, they're people at the end of the day. And anyone can attest that having your life upended on a whim is less than ideal.

Going back to the ethics for a quick second, I’m not too sure where I stand. Yes, AI is unethical for all the reasons put forward.

The deeper philosophical question that I struggle to answer is: “if you were offered the ability to ingest all of human knowledge and reproduce it with minimal effort, would you decline?”
 
  • Like
Reactions: bjn

papadage

Ars Legatus Legionis
41,731
Subscriptor++
I am more middle of the road. I strongly dislike publicly trained GenAI and LLMs because they are very uncontrolled and tend to produce junk.

On the other hand, my company jumped onto Copilot as a partner with MS, and we have had incredible results creating internal tools and client-facing features in our products. I work at Moody's, and the implementation in my group's (Moody's Insurance Solutions, specifically natural catastrophe analytics) products and services has given our clients the ability to:

  • Get great summaries of documentation and synthesize it into new formats. Clients can ask complex methodology questions and get step-by-step how-tos based on a combination of our product docs, methodology docs, support case documentation, training programs, and contracts. The answer is tailored to client entitlements.
  • Get scripts, API snippets, and queries for joining and modifying data in complex ways.
  • Get client-ready deliverable reports based on client data and results in their analytics tenant.
  • Build complex dashboards with natural language.

None of that requires public training data, and all factual statements or assertions are linked to a source document proprietary to us. It's leagues ahead of publicly facing ChatGPT or Bard.
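
For the curious, the source-linking follows the retrieval-augmented pattern: fetch the relevant vetted document first, let the model answer only from it, and return the source ID with the answer. A deliberately tiny Python sketch, with hypothetical names and a placeholder for the completion call rather than our actual Copilot-based stack:

# Answer questions only from vetted documents, and always cite the source.
vetted_docs = {  # hypothetical in-house document lake
    "methodology-101": "Losses are simulated per event and aggregated per policy.",
    "api-guide": "POST /v1/analyses with a portfolio ID starts a batch run.",
}

def retrieve(question):
    # Crude keyword-overlap scoring; real systems use vector embeddings.
    words = set(question.lower().split())
    return max(vetted_docs, key=lambda doc_id: len(words & set(vetted_docs[doc_id].lower().split())))

def answer(question, llm=None):
    doc_id = retrieve(question)
    context = vetted_docs[doc_id]
    # llm is a placeholder for whatever completion API you call; the model
    # is only allowed to answer from the retrieved context.
    text = llm(f"Answer from this context only: {context}\nQ: {question}") if llm else context
    return text, f"source: {doc_id}"

print(answer("How do I start a batch run with a portfolio ID?"))

The point is that the model never sees anything but vetted text, and the source ID travels with every answer.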

On the credit and financials side, it can produce almost professional-quality credit and risk memos for banks and other financial institutions, and it can update them automatically as the data feed grows over time with news, financials, and company relationships (ownership, shells, and supply chain).

This tells me that what makes GenAI valuable is a big lake of vetted, proprietary data that is already very structured, and the company discipline to dogfood it and roll out only proven uses to customers.

The Wild West hype train is for scam artists and gullible fools.