

Is there a difference, besides SSDs tending to be plugged in all the time? Maybe better firmware?


So… an SD card?


Are you sure? Check.
Where you jumped in is me, pointing out, repeatedly, that LLMs and IT have nothing to do with the actual article. Y’know, the doctors I keep mentioning? They’re not decorative.


You literally did.
“Concerning that the same is happening in medical even for the experts.”


No. You’re making a faulty comparison. The thing in this article is exclusively for experts. Using it made them better doctors, but when they stopped using it, they were out-of-practice at the old way. Like any skill you stop exercising. Especially at an expert level. Your junior programmers incompetently trusting LLMs is not the same problem in any direction.
This is genuinely important, because people are developing prejudice against an entire branch of computer science. This stupid headline pretends AI made cancer detection worse. Cancer’s kind of a big deal! Disguising the fact that detection rates improved with this tool, by fixating on how they got worse without it, may cost lives.
A lot of people in this thread are theatrically advocating the importance of deep understanding of complex subjects, and then giving a kneejerk “fuckin’ AI, am I right?”


Some guy blogged that the smart ones move to advertising.


Neural networks becoming practical is world-changing. This lets us do crazy shit we have no idea how to program sensibly. Dead-reckoning with an accelerometer could be accurate to the inch. Chroma-key should rival professional rotoscoping. Any question with a bunch of data and a simple answer can be trained at some expense and then run on an absolute potato.
So it’s downright bizarre that every single company is fixated on guessing the next word with transformers. Alternatives like text diffusion and Mamba pop up and then disappear, without so much as a ‘so that didn’t work’ blog post.
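For context on the dead-reckoning example: the naive baseline is just double integration, sketched below in 1-D (function name and sample numbers are mine). In practice, sensor bias integrates twice and drifts badly - that accumulated error is the sort of thing a trained network could learn to cancel.

```python
# Minimal dead-reckoning sketch: integrate acceleration twice to get position.
# Assumes ideal, noise-free accelerometer samples at a fixed rate. Real sensors
# have bias, which double integration turns into quadratic drift.

def dead_reckon(accels, dt):
    """Integrate 1-D acceleration samples (m/s^2) into a final position (m)."""
    velocity = 0.0
    position = 0.0
    for a in accels:
        velocity += a * dt
        position += velocity * dt
    return position

# Constant 1 m/s^2 for 1 s at 100 Hz: exact answer is 0.5·a·t^2 = 0.5 m;
# this simple discretization gives 0.505.
print(dead_reckon([1.0] * 100, 0.01))
```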


Paragraph one says things getting better is bad because what if we stop.
Paragraph two is bemoaning the abacus for ruining mental math.
Paragraph three blames a new gizmo for the system as it exists.


We’re not talking about LLMs.
These doctors didn’t ask ChatGPT “does this look like cancer.” We’re talking about domain-specific medical tools.


This year they stopped fucking trying. You search three words and it ignores two of them. “We didn’t find many results for that.” Yeah! That’s why I wrote it that way! I didn’t type “colossus of argyle” because I wanted a thousand generic pages about a statue. Gimme some damn prog.


Which does cause problems now that Google search is shit.
Every time ‘new tool makes old skills rusty’ is treated as novel, I’m reminded of The Gentleman’s Magazine:
Instead of simply reproducing the operations of man’s intelligence, the arithmometer relieves that intelligence from the necessity of making the operations. Instead of repeating responses dictated to it, this instrument instantaneously dictates the proper answer to the man who asks it a question. It is not matter producing material effects, but matter which thinks, reflects, reasons, calculates, and executes all the most difficult and complicated arithmetical operations with a rapidity and infallibility which defies all the calculators in the world. The arithmometer is, moreover, a simple instrument, of very little volume and easily portable. It is already used in many great financial establishments, where considerable economy is realized by its employment.
It will soon be considered as indispensable, and be as generally used as a clock, which was formerly only to be seen in palaces, and is now in every cottage.
This was a crank-powered adding machine. Numbers used levers instead of buttons because buttons hadn’t been invented yet. There were already people who expected the next version would do everything for us - and people who thought that would be bad, somehow.


Should urologists still train to detect diabetes by taste? We wouldn’t want the complexity of modern medicine to stunt their growth. These quacks can’t sniff piss with nearly the accuracy of Victorian doctors.
When a tool gets good enough, not using it is irresponsible. Sawing lumber by hand is a waste of time. Farmers today can’t use scythes worth a damn. Programming in assembly is frivolous.
At what point do we stop practicing without the tool? How big can the difference be, and still be totally optional? It’s not like these doctors lost or lacked the fundamentals. They’re just rusty at doing things the old way. If the new way is simply better, good, that’s progress.


“Concerning that the same is happening in medical even for the experts.”
It isn’t.
Glad we cleared that up?


All else being equal, demand for a product increases when the prices of its complements decrease.
A fascinating perspective. Feels very “selfish gene.”
Having read the article, I don’t think the headline fits. I thought it was going to be about ChatGPT 5 falling flat. Y’know, leaving no room to downplay the hype, and getting stuck with unmistakable big claims.
I’m not convinced this angle makes sense, either. If there’s a part you monopolize, and a part you turn into a race to the bottom - OpenAI’s only participating in the race to the bottom. Everyone switched to their thing because it’s a commodity. That doesn’t make them “the default place people go.” It makes them the current cheapest option. If every other big AI company is stuck in a dollar auction, unable to cut their losses, they’ll try to be even cheaper.
Jevons paradox says this doesn’t even threaten revenue. People will pay twice as much to use ten times more. If token quantity and speed are already useful (citation needed), then people will find new uses. If they’re dirt cheap, who cares? The applications people consider become downright frivolous.


Tone policing, followed by essentialist insults. Zero self-awareness.
Meanwhile, I’ve repeatedly pointed out: these doctors have the skills. The machine only helps. You can’t or won’t engage with that.


Okay cool, that’s not what’s happening here.
These aren’t “vibe doctors.” They’re trained oncologists and radiologists. They have the skill to do this without the new tool, but if they don’t practice it, that skill gets worse. Surprise.
For comparison: can you code without a compiler? Are you practiced? It used to be fundamental. There must be e-mails lamenting that students rely on this newfangled high-level language called C. Those kids’ programs were surely slower… and ten times easier to write and debug. At some point, relying on a technology becomes much smarter than demonstrating you don’t need it.
If doctors using this tool detect cancer more reliably, they’re better doctors. You would not pick someone old-fashioned to feel around and reckon about your lump, even if they were the best in the world at discerning tumors by feel. You’d get an MRI. And you’d want it looked at by whatever process has the best detection rates. Human eyeballs might be in second place.


No shit, it’s my analogy. And I made clear - the underlying skill still exists.
These doctors can still spot cancer. They’re just rusty at eyeballing it, after several months using a tool that’s better than their eyeballs.
X-rays probably made doctors worse at detecting tumors by feeling around for lumps. Do you want them to fixate on that skill in particular? Or would you prefer medical care that uses modern technology?


This is not that kind of AI. It’s not an LLM trained on WebMD. You cannot reason about this domain-specific medical tool, based on your experience with ChatGPT.


“I can do math by hand.”
“But what if you can’t?”
Incorrect.


It’s a little weird that wear leveling isn’t handled at the software level, given that you can surely pick free sectors randomly. Random access is nearly free. So is idle CPU time.
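To make that concrete, here’s a toy sketch of host-side wear leveling by random placement (my illustration, not any real flash translation layer): picking each write target uniformly from the free sectors spreads wear statistically, with no per-sector counters needed.

```python
import random

class RandomPlacementDisk:
    """Toy block device that picks write targets at random from free sectors."""

    def __init__(self, n_sectors):
        self.free = set(range(n_sectors))
        self.writes = [0] * n_sectors  # wear counter, kept only to show the spread

    def allocate(self):
        # Uniform choice over the free set is the entire wear-leveling policy here.
        sector = random.choice(tuple(self.free))
        self.free.remove(sector)
        self.writes[sector] += 1
        return sector

    def release(self, sector):
        self.free.add(sector)

disk = RandomPlacementDisk(1024)
for _ in range(100_000):               # hammer one allocate/free pattern repeatedly
    disk.release(disk.allocate())
print(min(disk.writes), max(disk.writes))  # wear clusters near 100000/1024 ≈ 98
```

Real controllers track wear explicitly rather than trusting chance, but the comment’s point - that the policy itself is cheap enough to live above the block-device interface - holds either way.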