Wednesday, November 20, 2024

AI questions. Crabgrass has no answers. Crabgrass has impressions about chasing a fool's errand.

Ars Technica: What if AI doesn’t just keep getting better forever? -- New reports highlight fears of diminishing returns for traditional LLM training

Ars - New secret math benchmark stumps AI models and PhDs alike -- FrontierMath's difficult questions remain unpublished so that AI companies can't train against it.

Links re second item: here and here 

Not that Crabgrass readers need to be super astute to follow these; the point is just that this stuff is out there and it is interesting.

What Crabgrass sees is that a tremendous amount of money has, so far, been wasted on AI's LLMs, as witnessed by Nvidia holding the largest market cap of any company, and by the supposed improvements in which a simple, concise sentence can be butchered by bad AI ideas. Microsoft spent a lot, got little, and keeps at it, even though Windows 11 as an operating system, without the added Copilot garbage, is not too bad. Search is search, and it is getting garbaged up. Because they can. Not that it's a good idea, but they can, so they do.

Tons have been spent on the Cloud, while this is being typed on Blogger with a barebones laptop, which for now is enough to search the Internet and read the items of interest that search returns. AI, so far, is an all-talk, no-action failure of large proportions. So, obviously, sink more money into it and pray. That seems to be Microsoft's business plan, until they shake up management.

To upgrade the home station, a gaming laptop or desktop might be worthwhile, but only if the goal is learning to program a GPU, and a single GPU in one machine cannot do much anyway when whole server farms are needed to accomplish very little.

Yes. Call me a Luddite. I can take it.