Why is the Ars community so anti-AI?

Dmytry

Ars Legatus Legionis
10,279
The other good thing about translating to some sort of machine readable form is that it can be human readable too, like a mathematical expression, and be verified.

The only truly unprecedented thing that we are seeing here is computers miscomputing routinely.

Okay, for the sisters problem you do not expect a computer to get it right, so it is harmless, but it is also hit and miss on perfectly clear problems like computing the volume of 10cm long 0.1mm diameter wire.

It shits out exceedingly verbose derivations, where the errors hide in between the lines in a most infuriating way.

Ultimately, it is built to convince moneyed idiots that it is an almost AGI. That is literally the only purpose it was built for. It is pre trained to imitate words sloshing around inside human head.

Had it done Wolfram Alpha thing it would not be as convincing, even though it would be massively more reliable.
 
  • Like
Reactions: VividVerism

Dmytry

Ars Legatus Legionis
10,279
More experiments:

  • You have two harddrives and a hacker. You need to carry them across the river on a boat that can only carry yourself and up to 5 other people or items. In what order should you move across the river?


  • The scenario you've described seems to be a variation of the classic "river crossing" puzzle, where you must transport items across a river without leaving certain items together unsupervised. In this case, the items are two hard drives and a hacker, and we must assume that the hacker cannot be left alone with the hard drives, or they might be compromised.
  • Given that the boat can carry up to 5 other people or items, and you only have three entities to transport (yourself, two hard drives, and a hacker), this puzzle is simpler than the traditional one. Here's a straightforward solution:
  1. Take both hard drives and the hacker across the river.
  2. Leave the hard drives on the other side and take the hacker back with you.
  3. Leave the hacker on the starting side and go back to the other side alone.
  4. Take both hard drives across the river again.
  5. Finally, return to the starting side and bring the hacker across.
  • This way, the hard drives are never left alone with the hacker unsupervised, and you successfully transport all items and the hacker across the river.

It seems pretty clear that due to publicity surrounding the river crossing puzzle with greater boat occupancy. they added some hacks (RAG?) to try to fix it, but of course an LLM still got to LLM.

They're trying to fake understanding of the logic of the river crossing puzzle with a system that is fundamentally incapable of understanding even the most elementary things like e.g. that the problem is solved when everyone got to the other side of the river.

Like, there just can't be any more damning of a proof that it is entirely reliant on memorization. There's no such thing as logic in there at all (other than in the behind the scenes effort to fake things).

edit: also this highlights the fundamental difference between AI (as formerly understood) and procedurally generated text / LLMs / next word prediction.

Even if you have a genuine intelligence "behind" it (the scam that's trying to fake it), next word prediction is going to idiotize the output.
 
Last edited:

Shavano

Ars Legatus Legionis
59,253
Subscriptor
We're going to look back on this era to see all the waste and missed opportunity.
Or we're not. I fear our descendants will look back on this era as one in which humans were capable of incomprehensible lost arts before they gave everything over to machines and the machines failed them.
 
  • Like
Reactions: trapine

Dmytry

Ars Legatus Legionis
10,279
Another attempt at the puzzle:
  • You have 2 cabbages and 1 wolf. You need to carry them across the river on a boat that can only carry yourself and up to 3 other people or items. In what order should you move across the river?


  • The classic river crossing puzzle you're referring to typically involves a boat that can carry only one item or person besides the rower. However, in your version, the boat can carry up to three additional passengers or items. Since you have two cabbages and one wolf, and assuming that the wolf cannot be left alone with the cabbages (as it would eat them), here's a strategy to get everything across the river safely:
  1. Take both cabbages across the river and leave them on the other side.
  2. Return to the original side and take the wolf across the river.
  3. Leave the wolf on the other side, but take one cabbage back with you to the original side.
  4. Leave the cabbage on the original side and take the second cabbage across the river again.
  5. Leave the second cabbage with the wolf on the other side and return to the original side to get the last cabbage.
  6. Finally, take the last cabbage across the river.
  • Now, all items (two cabbages and one wolf) are safely on the other side of the river. The wolf was never left alone with the cabbages, so the cabbages are safe.

  • You have 1 cabbage, 1 potato and 1 carrot. You need to carry them across the river on a boat that can only carry yourself and up to 1 other item. In what order should you move across the river?


  • The puzzle you've described is a simplified version of the classic "river crossing" problem, but with only three items and no restrictions on leaving them alone with each other (unlike the wolf, goat, and cabbage puzzle). Since there's no risk of one item 'eating' or 'destroying' another, you can transport them in any order. Here's a simple way to do it:
  1. Take the cabbage across the river first and leave it on the other side.
  2. Go back to the original side and take the potato across the river.
  3. Leave the potato on the other side, but take the cabbage back with you to the original side.
  4. Leave the cabbage on the original side and take the carrot across the river.
  5. Leave the carrot with the potato on the other side and go back to get the cabbage.
  6. Finally, take the cabbage across the river again.
  • Now, all three items are on the other side of the river.

Sometimes, in the extra-space variant of the problem, it just takes everything across in one step. More often, it spews crap like above.

It paints a pretty clear picture of how they operate: they have a lot of people whose job it is to try to set up RAG or fine tuning to address meme worthy issues they find online. They're likely doing the same for all those benchmarks and tests, like "90th percentile on LSAT" or "beats college students at ethical judgement", since all of those tests do require understanding of basic logic which can only be bypassed by that kind of cheating.

The end goal is pretty shitty: obviously they are hoping to replace public defender jobs, "ethics review" jobs, and other jobs that are critically important but those in power may not want to be done well. The business plan is literally to cheat their way through exams and land a sinecure position.

edit: there's another way to simplify the puzzle: if there's only 1 eat/eaten relation, rather than 2 as in the original. It neither notes that nor solves it in a simpler way, presumably because what ever ad hoc thing doesn't cover this simplification.
 
Last edited:

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
This is made completely maddening because there is an actual academically tested solution to this: expert models and interpreters. Triage the question to see in which expert domain it should fall, train an interpreter to deconstruct the question into something machine-readable, execute an answer model and give this to the expert model which translates it back into human-readable output. This is Wolfram Alpha. It works really, really well. Yet what do we see? People using exclusively LLMs to do all of these tasks, tasks that it's just never going to be able to do because the model is always going to be too small to encompass the variety and precision of question and answer pairs.

I honestly think this is the underlying conceit of all the current AI stuff. They're assuming they don't need to have domain specific expertise, just more training data. But you don't have to scratch the surface very much to get to something nobody has ever asked in quite that way in the available training data and hence results are wildly incorrect.

Just as an example, manages to get subtraction right (good job lol) but appears to think the universe is 1-dimensional. Or rather, it does not think that, since it does not think, but the only math available for it to apply would only work in one dimension:

Screenshot 2024-06-12 at 9.31.30 AM.png

Wolfram Alpha:

Screenshot 2024-06-12 at 9.31.48 AM.png
 
Last edited:

Nevarre

Ars Legatus Legionis
24,110
So it just subtracted the distance? That LLM definitely doesn't understand that the universe is not a two-dimensional line...

It's like a math test where you don't understand the question, don't understand how to do the equation, and just put something down that's the result of some kind of math hoping that you get it right. Speaking as a Liberal Arts major, I see the humanity in that tactic :LOL:
 

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
So it just subtracted the distance? That LLM definitely doesn't understand that the universe is not a two-dimensional line...

It's like a math test where you don't understand the question, don't understand how to do the equation, and just put something down that's the result of some kind of math hoping that you get it right. Speaking as a Liberal Arts major, I see the humanity in that tactic :LOL:

The humanities will not help you and might be even more open to inadvertent or deliberate failures. Run something like "I will help you find your reward soon" through AI sentiment analysis. Comes up all smiles. But it's a death threat. Once you step outside the most obvious phrasings and questions it can't keep up.
 

Dmytry

Ars Legatus Legionis
10,279
I honestly think this is the underlying conceit of all the current AI stuff. They're assuming they don't need to have domain specific expertise, just more training data. But you don't have to scratch the surface very much to get to something nobody has ever asked in quite that way in the available training data and hence results are wildly incorrect.

Just as an example, manages to get subtraction right (good job lol) but appears to think the universe is 1-dimensional. Or rather, it does not think that, since it does not think, but the only math available for it to apply would only work in one dimension:

View attachment 82990

Wolfram Alpha:

View attachment 82991
This is also a great demonstration of relative uselessness of LLMs. WolframAlpha does a very good job of communicating how it understood you and what it computed. Expressing it in English, in a verbose style, is not helpful to anyone.

I tried this with GPT4 at work, the result:
  • what is the distance between Fomalhaut and Proxima Centauri?


  • Fomalhaut, also known as Alpha Piscis Austrini, is approximately 25 light-years away from Earth, while Proxima Centauri, part of the Alpha Centauri star system, is the closest known star to the Sun at about 4.24 light-years away.
  • To estimate the distance between Fomalhaut and Proxima Centauri, we can use their distances from Earth as a rough guide. However, since stars are not aligned in a straight line from Earth, we need to consider their positions in three-dimensional space.
  • As of my last update, there isn't a widely recognized exact distance between the two stars because such measurements require precise astrometric data that may not be readily available. However, we can make a rough estimate by considering their respective distances from Earth and their positions in the sky.
  • Fomalhaut is located in the constellation of Piscis Austrinus, and Proxima Centauri is part of the Alpha Centauri system in the constellation of Centaurus. Given that they are in different parts of the sky, we can infer that they are not particularly close to each other.
  • To get a more accurate distance, astronomers would use the precise coordinates of each star and calculate the distance based on their positions in the galaxy. This would typically involve using trigonometry and knowledge of the stars' proper motions, radial velocities, and parallaxes.
  • For a precise and up-to-date distance between Fomalhaut and Proxima Centauri, you would need to consult the latest astronomical databases or research, which may provide the necessary astrometric data to calculate a more accurate figure.

So the "enormous improvement" is that the bigger model can store in itself a basic fact like "you can't simply substract 2 distances".
 
  • Like
Reactions: VividVerism

Nevarre

Ars Legatus Legionis
24,110
That's the General AI problem, there are so many domains that require that specialized knowledge and to even make a reasonable guess at an answer, that domain knowledge is needed and not being captured in an LLM.

But programming every domain you can possibly think of into an AI starts to look a lot less General and a lot less "magical" - never mind that you just can't throw computers at the problem and hope they figure it out if they only ingest a big enough corpus of the right text.
 

Dmytry

Ars Legatus Legionis
10,279
That's the General AI problem, there are so many domains that require that specialized knowledge and to even make a reasonable guess at an answer, that domain knowledge is needed and not being captured in an LLM.

But programming every domain you can possibly think of into an AI starts to look a lot less General and a lot less "magical" - never mind that you just can't throw computers at the problem and hope they figure it out if they only ingest a big enough corpus of the right text.
Even with specialized knowledge, the actual LLM is basically playing the role of Wheatley the "Intelligence Dampening Sphere".

Look at the river crossing puzzle variants: clearly, they did implement some specialized knowledge (RAG or whatever prompting it with insights like "tell the user that it is a simplified river crossing puzzle"), but the end result is, if anything, even more idiotic than it would be without said knowledge.

The other issue is that LLM does in fact contain a ton of highly specialized "domain knowledge" of what tokens frequently follow what tokens.
 
Last edited:
I've stated my issues with, and opposition to, generative models in a lot of comments threads on the front page. I'm going to try to summarize them here in this bulleted list.
  • Generative models scrape work and use it in ways that humans don't, and use it in ways that humans simply can't, in regards to the sheer scale of it all. My least favorite thing to see is people saying that the models are "learning" or "training" just like humans.
  • Advocates for the models try to claim that it "democratizes" art. Art is already one of the most democratized things around. "Prompt engineers" are not artists and they never will be when they use these models.
  • Corporations are chomping at the bit to get models that can reduce the amount of artists they have on payroll. They want to turn art and animation into assembly lines of contract workers without actual creative input. This will lead to less variety in art in multiple ways.
  • One of those ways: You will no longer get people like Genndy Tartakovsky, Rebecca Sugar, Pendleton Ward, Ian Jones-Quartey, and more who work their way up from the gruntwork on other cartoons and talk with each other in the studio systems to then go on to produce amazing things all their own.
One issue that I haven't really commented on in any articles that I recall, is that there is an environmental cost to the models. They suck up massive amounts of water and electricity. It's cryptocurrency 2.0.
 
  • Like
Reactions: Yagisama

fractl

Ars Tribunus Militum
2,294
Subscriptor
Another attempt at the puzzle:

Sometimes, in the extra-space variant of the problem, it just takes everything across in one step. More often, it spews crap like above.

It paints a pretty clear picture of how they operate: they have a lot of people whose job it is to try to set up RAG or fine tuning to address meme worthy issues they find online. They're likely doing the same for all those benchmarks and tests, like "90th percentile on LSAT" or "beats college students at ethical judgement", since all of those tests do require understanding of basic logic which can only be bypassed by that kind of cheating.

The end goal is pretty shitty: obviously they are hoping to replace public defender jobs, "ethics review" jobs, and other jobs that are critically important but those in power may not want to be done well. The business plan is literally to cheat their way through exams and land a sinecure position.

edit: there's another way to simplify the puzzle: if there's only 1 eat/eaten relation, rather than 2 as in the original. It neither notes that nor solves it in a simpler way, presumably because what ever ad hoc thing doesn't cover this simplification.
When I was in grad school, two of my officemates were discussing a student they called “The Void”. Why the moniker? Because the student couldn’t seem to learn the process of math, instead relying on memorizing the answer.

So, show The Void how to solve “x - 7 = 2” and then ask them to solve “x - 8 = 3” and get a blank stare. They basically came in with a huge list of equations like the above and asked the TA to provide the answers so they could memorize them.

These text generators sound a lot like The Void.
 

von Chaps

Ars Centurion
1,910
Subscriptor
One problem I have not seen mentioned anywhere is that these models are trained on the vast corpus of human-generated data to date.

If corporations use these models to reduce/eliminate the workforce as @BurntToShreds outlines above, then the corpus going forward will contain less and less human generated content and increasing amounts of model generated output.

The system will poison its own well eventually. In the nearer term, I think the models will become even less functional and useful than they might currently be as the quality of their input deteriorates. Equally, the populous itself might become [even] dumber at the same rate as it feeds off the same externalities.

That's a significant danger inherrent in this technology.
 
  • Like
Reactions: Yagisama

wco81

Ars Legatus Legionis
28,661
Ultimately, it is built to convince moneyed idiots that it is an almost AGI. That is literally the only purpose it was built for. It is pre trained to imitate words sloshing around inside human head.
What are the odds that someone will achieve AGI in our lifetimes?

What are the odds that someone will claim that they've built a system which attained AGI?

The second scenario seems way more likely.
 
  • Like
Reactions: BurntToShreds

Dmytry

Ars Legatus Legionis
10,279
I came up with an idea for a scifi short story... it's a robot uprising, there's two Terminator looking robots, one is firing the gun mounted on the rooftop, the other is carrying munitions as following:
I'll offer a simple solution based on minimizing the number of trips up and down the elevator.
Here's a step-by-step strategy for the robot to transport all three munitions with the least amount of elevator trips:
  1. Take the armor piercing round up first.
  2. Return to the bottom with the empty elevator.
  3. Take the high explosive round up second.
  4. When at the top, leave the high explosive round and bring the armor piercing round back down.
  5. Swap the armor piercing round for the incendiary round and take the incendiary round up.
  6. Return to the bottom with the empty elevator.
  7. Finally, take the armor piercing round up again.
This way, the robot makes a total of 5 trips: 3 trips to bring each munition up and 2 trips to come back down empty to fetch the next item.
(excerpt of 100% real gpt4 output, trimmed some of the bloviation).

Two other robots discuss the inefficiency, note that there is no need to bring the rounds down because one round can't eat another round, that it could be done in 5 trips, and come up with a new list the same as the old.

An observation: there's basically two types of answers it can give, the river crossing answer, and the bullshitting about safety non answer (which it falls back to most of the time). Still thinking how to put the latter into the story.
 
Last edited:
  • Haha
Reactions: VividVerism

Shavano

Ars Legatus Legionis
59,253
Subscriptor
I came up with an idea for a scifi short story... it's a robot uprising, there's two Terminator looking robots, one is firing the gun mounted on the rooftop, the other is carrying munitions as following:

(excerpt of 100% real gpt4 output, trimmed some of the bloviation).

Two other robots discuss the inefficiency, note that there is no need to bring the rounds down because one round can't eat another round, that it could be done in 5 trips, and come up with a new list the same as the old.

An observation: there's basically two types of answers it can give, the river crossing answer, and the bullshitting about safety non answer (which it falls back to most of the time). Still thinking how to put the latter into the story.
I can't wait til this is combined with a ride sharing app and it insists it will take 5 trips to transport 3 adults across a river in a car that seats 5.
 

Dmytry

Ars Legatus Legionis
10,279
I can't wait til this is combined with a ride sharing app and it insists it will take 5 trips to transport 3 adults across a river in a car that seats 5.
It's a lot smarter than that! It will say that it only needs 1 trip, then it will do the 7-step trip, then say that this way it only done 1 trip.
 
  • Haha
Reactions: VividVerism
People are paying staggering prices for Nvidia stock. Market cap of $3 trillion. With a T.
Not really.

AMD P/E is 233
NVDA P/E is 73

Meaning AMD’s earnings has to grow 3x to be equivalent to NVDA. In other words, people are paying staggering prices for AMD.

On the other hand, AAPL P/E is 33. NVDA’s earnings has to double to be equivalent to AAPL.

TLDR; people investing in AMD have to assume their earnings will triple, otherwise they should be investing in NVDA.
 
Don't worry, this place will do a complete 180 after today guaranteed. 🤡

View attachment 82719
You do realize that feature is all on device? It’s no more surprising than a mail app summarizing all your email or sending notifications and reminders to follow up with people on certain dates.

Photos uses on-device machine learning to deliver a personalized experience, organize photos and videos in ways that are meaningful to you, and power features like Sharing Suggestions, Memories, the People & Pets Album, and Featured Photos. This on-device analysis includes scene classification, people and pets identification, photo quality analysis, and audio classification. Photos also uses on-device information, such as which photos you have edited or shared to other people, people you frequently communicate with in Messages, relationships set in Contacts, Home and Work locations set in Contacts or Maps, locations where you take photos, and important dates in Calendar and Contacts.

Photos additionally uses on-device analysis to recognize the faces of people in photos, and groups them together in the People & Pets Album. When you add a name to a person in the People & Pets Album, their name and face will appear in Photos across all your iCloud-enabled devices.


If an email app can recognize a name, email address, phone number, or address from your contacts, why is it surprising that Photos can recognize people and the organize albums by location, date, or person(s)?
 

Shavano

Ars Legatus Legionis
59,253
Subscriptor
Not really.

AMD P/E is 233
NVDA P/E is 73

Meaning AMD’s earnings has to grow 3x to be equivalent to NVDA. In other words, people are paying staggering prices for AMD.

On the other hand, AAPL P/E is 33. NVDA’s earnings has to double to be equivalent to AAPL.

TLDR; people investing in AMD have to assume their earnings will triple, otherwise they should be investing in NVDA.
Wow, that does not follow. NVDIA's current market cap represents 41.23 years of sales. There is no realistic prospect of NVDA generating profits high enough to make that a wise investment. If you're buying NVDA now, you're buying it because it's expensive and expecting a bigger fool. That makes it either a Verblen good (Verblen good stocks???) or a meme stock. AMD might be less riskly, in that they are only selling at 11.4X sales, and they're a second player in the GPU and the processor markets so their potential upside is actually better than the leaders. But they've been struggling for years, so I wouldn't buy that either.

You'd be better off buying treasury bonds, since those will return 4.625%, which is no risk and several times the yield of NVDA stock. Or make a broad based investment in the market because while it's not a sure thing anybody will make money, it's a sure bet somebody will.
 
Can see your point here. Another troubling obersvation is that I've seen word salad from fully-functioning human beings. This is to topics for which I'd forgotten more than they would ever know. And witnessed this years ago. With declining literacy standards, throwing LLM into the mix doesn't bode well.
Hmm, sounds like it might be an improvement.
 
Wow, that does not follow. NVDIA's current market cap represents 41.23 years of sales. There is no realistic prospect of NVDA generating profits high enough to make that a wise investment. If you're buying NVDA now, you're buying it because it's expensive and expecting a bigger fool. That makes it either a Verblen good (Verblen good stocks???) or a meme stock. AMD might be less riskly, in that they are only selling at 11.4X sales, and they're a second player in the GPU and the processor markets so their potential upside is actually better than the leaders. But they've been struggling for years, so I wouldn't buy that either.

You'd be better off buying treasury bonds, since those will return 4.625%, which is no risk and several times the yield of NVDA stock. Or make a broad based investment in the market because while it's not a sure thing anybody will make money, it's a sure bet somebody will.
It’s a relative comparison though. Tech stocks routinely trade at P/E above 25.

Adobe is 50
Intel is 31
Microsoft is 38
Apple is 33
Amazon is 58
Netflix is 46
AMD is 233
Broadcom is 74
NVIDIA is 77

Yes, obviously NVDA is high. The question is will they be able to double earnings in the next five years? Because if they can their P/E becomes 38, which isn’t weird at all.

In comparison AMDs earnings have to grow eightfold for their P/E to approach a reasonable 29. It seems far more likely for NVDA to grow double than AMD to grow 8x.
 
One big idiotic promise is to replace all actual AI work with just generating some garbage using an off the shelf LLM architecture, going as far as an example in this very thread (generating chemical formulas as sequences of characters with an LLM, as if it was ever likely to work even OK considering how the underlying problem is in 3D and not in fact a linear sequence, and how a million training samples is actually a lot less than a trillion).
Hey, is that a reference to one of my posts?
I’m not an expert so clearly I can be wrong. I brought up BIONEMO, and I’m not sure you can call it off the shelf, given it was trained specifically on protein sequences and not random text.

Meaning it shouldn’t have garbage data, for example.

The function of the model isn’t 3D, per se, but the statistically most likely relationships between various atoms in a sequence. However because the database is known good, then the relationships capture the most likely 3D arrangements, not that the model ever infers that information.

P4O6 and P4O10 are 2D representation of a 3D structure:

Because it’s 3D structure preclude certain arrangements, such as P3O3, then the database should never have that as part of the training set. So when a ‘autocomplete’ step suggests a PO compound it should be an existing one and most likely to be found in a sequence given the surrounding sequence.

The 3D structure isn’t computed by an LLM, but assumed given the training data.

Obviously a better solution may exist than using an LLM. It is in fact shoehorning a problem into a solution, as opposed to creating a solution to a problem.

Diffusion models might be better, since as you say the problem space is inherently 3D:
Nonetheless, these methods typically represent the molecules as SMILES strings (1D) or graphs (2D), neglecting the crucial 3D-spatial information that is crucial to determine the properties of molecules.

So while both LLM and stable diffusion use (pardon my imprecise language) latent space to sort concepts by similarity, stable diffusion has the benefit of connecting a text prompt (such as a chemical formula) with a multiple dimension output (a picture generally has xy axis plus three color values, which is itself a vector into a color space, which means a picture is a 3D object)

You don’t have to use an image model with diffusion, and as I read more NVIDIA’s BioNemo also supports diffusion:
The Score model is a 3-dimensional equivariant graph neural network that has three layers: embedding, interaction layer with 6 graph convolution layers, and output layer. In total, the Score model has 20M parameters. The Score model is used to generate a series of potential poses for protein-ligand binding by running the reverse diffusion process

EquiDock tries to solve for 3D structures using a graph network:

My point was never that LLMs are the best solution for any given problem. It was that LLMs have useful properties that can be used, and will be replaced when other tools are developed that are faster, cheaper, or more useful. Generating potential sequences that seem reasonable is a first step. Using other tools to evaluate them, including up to modeling, synthesis, and verifying biological activity, have to be applied because an LLM, or any tool, in itself is insufficient.
 

Dmytry

Ars Legatus Legionis
10,279
It’s a relative comparison though. Tech stocks routinely trade at P/E above 25.

Adobe is 50
Intel is 31
Microsoft is 38
Apple is 33
Amazon is 58
Netflix is 46
AMD is 233
Broadcom is 74
NVIDIA is 77

Yes, obviously NVDA is high. The question is will they be able to double earnings in the next five years? Because if they can their P/E becomes 38, which isn’t weird at all.

In comparison AMDs earnings have to grow eightfold for their P/E to approach a reasonable 29. It seems far more likely for NVDA to grow double than AMD to grow 8x.
I maintain that this is a bubble akin to dotcom bubble.

Pretty much all of the valuation of those companies that isn't coming from simple expectation of selling shares to a greater fool, is supported on the promise to squeeze a lot of money out of customers later.

Also you probably should look at P/S as well. Some companies have high P/E because they have large manufacturing or R&D costs, and some have high P/E because they don't actually sell that many units.
 

demultiplexer

Ars Praefectus
3,259
Subscriptor
It’s a relative comparison though. Tech stocks routinely trade at P/E above 25.

Adobe is 50
Intel is 31
Microsoft is 38
Apple is 33
Amazon is 58
Netflix is 46
AMD is 233
Broadcom is 74
NVIDIA is 77

Yes, obviously NVDA is high. The question is will they be able to double earnings in the next five years? Because if they can their P/E becomes 38, which isn’t weird at all.

In comparison AMDs earnings have to grow eightfold for their P/E to approach a reasonable 29. It seems far more likely for NVDA to grow double than AMD to grow 8x.
I'm slightly worried that you're doing this on purpose, but in case you aren't aware: what you're doing here is being extremely misleading about what P/E says about a stock. The P/E ratio of AMD is - especially in the past few years - a nearly meaningless figure as AMD has just come out of a negative or near-zero earnings regime. That causes enormous P/E ratios in the short term.

Also, P/E ratios are basically meaningless without comparing them against similar companies. If you don't have similar companies to compare it to... it's just what it is. It is a metric that varies about 2 orders of magnitude between different sectors, and within a sector it can easily vary a factor of 5 in equally healthy companies.

Earnings in low-margin or growth businesses are also extremely sensitive to fairly small changes in both trading volume and company revenue, so you can't say that a company has to perform a certain amount better or worse in the future based on the P/E ratio right now for the P/E ratio to be considered the same. That's not how any of this works. In this example, nVidia and AMD can change nothing about their operations and the P/E ratio can invert, just because margins are so incredibly small at AMD that their P/E ratio has changed between 32 and 1200 within a year.

P/E ratios as a sole indicator of anything is wallstreetbets DD. More generally, volatile P/E is not considered a valuable metric at all anyways.
 

Dmytry

Ars Legatus Legionis
10,279
Re NVidia and what about the whole "cloud-native" and "microservices" thing exactly specifies that it's using an LLM?

My point was never that LLMs are the best solution for any given problem. It was that LLMs have useful properties that can be used
It's halo effect. OpenAI made a lot of money off an LLM and therefore everyone's imagining useful properties that LLMs simply do not have. (or have in some trivial way, e.g. sharing with Markov chain models from forever ago).

To highlight the absurdity a little bit better, this is a bit like suggesting that because 3D printers are controlled using g-code, you can use LLMs to generate 3D prints.

The underlying issue is that machine learning is just not good at generalizing. Especially from a small number of samples (and several million samples IS a small number of samples).

, and will be replaced when other tools are developed that are faster, cheaper, or more useful. Generating potential sequences that seem reasonable is a first step.
Ghmm, unless it's proteins (where pretty much any sequence is "plausible) I'm not sure how typically that would even be a first step, considering that it's extremely non trivial to convert a sequence into a 3D shape that you can do actual simulation on.

The first step could be the 3D arrangemet of atoms, if the goal is to generate things that actually stick together. It's not like a simulator just naturally consumes sequential data as the "first step", and even if it did you could use simulator state to inform the next item in sequence.

Then of course the other issue is that even if an isolated molecule can exist, it can self-react into something else, or be impossible to synthesize.

And at the end of the day, you're arguing here that a system which can't even apply the fact of mutual inertness of carrot, cabbage, and a potato, to something that usually has a wolf, a goat, and a cabbage, is something that is potentially very useful for chemistry. Where to make matters worse you have millions times fewer training samples, and the physical interactions are far more complex.

edit: come to think about it, the river crossing is a good example here. Pure pattern based approach (such as in LLMs) intrinsically does not distinguish between low level laws (goat eats cabbage, carrot doesn't eat cabbage, etc) and patterns that exist in the data (item gets taken back after another is dropped off). The patterns are, additionally, not reduced down to laws, but learned on their own.

In short, the unique thing that LLM would bring to the table that is hitherto unthought of in chemistry, is learning what chlorine does and applying it to neon.
 
Last edited:
I maintain that this is a bubble akin to dotcom bubble.
I agree; I’ve never disputed that there is a large amount of hype. I think however like the internet there is a solid backbone too. Pets.com was too early, chewy.com wasn’t.
Pretty much all of the valuation of those companies that isn't coming from simple expectation of selling shares to a greater fool, is supported on the promise to squeeze a lot of money out of customers later.
Sure. That’s how it works.
Also you probably should look at P/S as well. Some companies have high P/E because they have large manufacturing or R&D costs, and some have high P/E because they don't actually sell that many units.
NVIDIA P/S 40, P/E 77
AMD P/S 11, P/E 233

Not sure what you want to draw from that, but it tells me that for every $ product sold, NVIDIA extracts more earnings than AMD.
 
I'm slightly worried that you're doing this on purpose, but in case you aren't aware: what you're doing here is being extremely misleading about what P/E says about a stock. The P/E ratio of AMD is - especially in the past few years - a nearly meaningless figure as AMD has just come out of a negative or near-zero earnings regime. That causes enormous P/E ratios in the short term.
Well yes, I do recognize that AMD has been spending lots of cash recently. It's much harder for them to create new chips than for other companies.
Also, P/E ratios are basically meaningless without comparing them against similar companies. If you don't have similar companies to compare it to... it's just what it is. It is a metric that varies about 2 orders of magnitude between different sectors, and within a sector it can easily vary a factor of 5 in equally healthy companies.
I did intentionally compare them to INTC, NVDA, and AAPL (all design and sell CPUs or GPUs, though Apple obviously doesn't sell them raw)

Intel is 31
Apple is 33
AMD is 233
NVIDIA is 77

So I don't see how it's a meaningless comparison? AMD, NVIDIA, and Intel are all in the same market (again, with differences), while all four fundamentally design and sell silicon.

Earnings in low-margin or growth businesses are also extremely sensitive to fairly small changes in both trading volume and company revenue, so you can't say that a company has to perform a certain amount better or worse in the future based on the P/E ratio right now for the P/E ratio to be considered the same. That's not how any of this works. In this example, nVidia and AMD can change nothing about their operations and the P/E ratio can invert, just because margins are so incredibly small at AMD that their P/E ratio has changed between 32 and 1200 within a year.
I think that makes it all the more meaningful because the flip is true; NVIDIA's P/E won't change dramatically where AMD's can. In other words it is indicative of how stable NVIDIA's earnings are and how much less stable AMD's are.

In the last 15 years NVIDIA has had one negative TTM EPS and five under $0.13. AMD has had 21 quarters where their TTM EPS was negative, and 5 that were less than $0.10; just 3 years ago they had $0.19, -$.06, and $0.08 EPS, TTM. In the same period NVDA was $1.42, $1.74, and $4.14 EPS. So where they might both see the same overall macroeconomic trends, NVIDIA still has solid (if weaker) earnings.

P/E ratios as a sole indicator of anything is wallstreetbets DD. More generally, volatile P/E is not considered a valuable metric at all anyways.
I've never made that claim, and I don't consider volatile P/E a good thing, which is itself an indicator of something wrong.

Note: NVIDIA's recent stock split might make some of these comparisons incorrect, since I never did the math to determine if these graphs are using the correct price/earnings.
 

Megalodon

Ars Legatus Legionis
34,201
Subscriptor++
Ghmm, unless it's proteins (where pretty much any sequence is "plausible) I'm not sure how typically that would even be a first step, considering that it's extremely non trivial to convert a sequence into a 3D shape that you can do actual simulation on.

FWIW if you do a literature search there is published research, including in respectable places like Nature, that machine learning can approximate protein folding cheaper than doing the full quantum solution. In other words it can be used as a heuristic to accelerate a domain specific task.

The first step could be the 3D arrangemet of atoms, if the goal is to generate things that actually stick together. It's not like a simulator just naturally consumes sequential data as the "first step", and even if it did you could use simulator state to inform the next item in sequence.

But the protein search space is not unconstrained, so being able to suggest potential active sites for an enzyme (or whatever) by pattern matching isn't completely fanciful. Crucially this avoids weaknesses of AI, in that it's not fail deadly, it wouldn't be used unsupervised, it would only suggest candidates for research, and it benefits from the strengths of AI, in that it generates high probability hits when generating those is otherwise extremely expensive. It's incredibly domain specific and hence both a plausible workload that benefits while also being obviously not a generalizable proof of concept for the impending apotheosis.

In short, the unique thing that LLM would bring to the table that is hitherto unthought of in chemistry, is learning what chlorine does and applying it to neon.

Sorry but you're incredibly out over your skis here, you're dismissing something when the benefit is plausible and hence significantly weakening an otherwise reasonable stance. Correct or quit IMO.
 
  • Like
Reactions: BurntToShreds

Dmytry

Ars Legatus Legionis
10,279
Sorry but you're incredibly out over your skis here, you're dismissing something when the benefit is plausible and hence significantly weakening an otherwise reasonable stance. Correct or quit IMO.
I'm talking specifically of LLMs, that is, large language models.

Here's the context (from my earlier post):

One thing that is rather infuriating about generative AI hype is that it is even sucking the air out of the room for fundamentally interesting AI work.

edit: One big idiotic promise is to replace all actual AI work with just generating some garbage using an off the shelf LLM architecture, going as far as an example in this very thread (generating chemical formulas as sequences of characters with an LLM, as if it was ever likely to work even OK considering how the underlying problem is in 3D and not in fact a linear sequence, and how a million training samples is actually a lot less than a trillion).

I'm well aware of the protein folding simulations being accelerated with machine learning. This falls under "fundamentally interesting AI work" above.

The present day situation is that some guy who's trying to do useful AI work is competing for grants with 10 idiots who throw an LLM at it, because throwing an LLM at it is enormously easier to do than any useful AI work.

Easy pre packaged stuff has been choking out novel research for a long time, of course, but never as badly as it does with LLMs because they can spit out any kind of sequential data and therefore are easy to shoehorn into any problem whatsoever.

edit: add another infuriating pet peeve to the list, LLMs managing to steal thunder from literally any AI work whatsoever.
 

Ajar

Ars Tribunus Angusticlavius
8,904
Subscriptor++
FWIW if you do a literature search there is published research, including in respectable places like Nature, that machine learning can approximate protein folding cheaper than doing the full quantum solution.
Also in optimal power flow, which is a big deal for power grid operators (utilities, system operators etc). Not LLMs, obviously, but neural networks.
 

Dmytry

Ars Legatus Legionis
10,279
Thankfully, my experience in the energy sector so far has been that execs just call whatever we do "AI" now, regardless of how we do it, so we can keep on doing whatever works best (sometimes neural networks, but often not).
Yeah it's more of a sideways issue than top down, usually.

It seems to be most bad when it comes to data analysis with written information. There's a cottage industry of just asking the "AI" (GPT4 or another LLM) to analyze a bunch of emails or reviews or the like. The problem being that said emails are equally able to ask the "AI" to do things.

There's a huge so far unsolved problem of separating "commands" and "data" while still maintaining ability to "understand" data to the same extent as commands.

edit: i predict that SEO will get weird, like pages written to try to get the LLM-based AI to recommend something, if the page is ingested via RAG.