  • jsomae@lemmy.ml to Asklemmy@lemmy.ml · Is there any hope for me?
    edited · 3 days ago

    This isn’t entirely correct. It’s kind of like saying “SAT score” is a racist pseudoscience – which honestly I can kind of get behind, heh. “IQ” is not a property of a human the way height or eye colour is, it’s just a test score. Yes, it’s used by racist people for racist ends, but racist people use everything for racist ends. The actual science behind IQ has always shown that (a) individual variation in IQ score is vastly, vastly greater than any potential racial factor in IQ, and (b) different research findings on racial averages in IQ score are varied enough that it’s hard to draw much of a conclusion. It’s also well known that IQ tests have a bias in favour of people from western developed nations. To me, it’s most likely that racial averages are similarly biased by the test.

    Dowsing is a pseudoscience – it falls apart under scrutiny. But under scrutiny, IQ test scores still correlate with success just like SAT scores do. They are slightly heritable, just like SAT scores are. It sucks, but that’s our capitalist society for you. (Let’s revolt.)

    But to the OP, please understand that these correlations are nothing more than correlations, and they tell you almost nothing at the individual level. Statistics about groups of people support broad guesses about the group, not conclusions about any one member. Statistics say the average person has one ovary and one testicle. Statistics say the average American has never heard of lemmy. So don’t let statistics define you – that would be pseudoscience.
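To spell out the ovary/testicle example: a group mean can describe literally no member of the group. A quick sketch with a made-up 50/50 population (the numbers here are mine, just for illustration):

```python
# A population split 50/50: half have 2 ovaries, half have 2 testes.
# (ovaries, testes) per person:
population = [(2, 0)] * 50 + [(0, 2)] * 50

mean_ovaries = sum(p[0] for p in population) / len(population)  # 1.0
mean_testes = sum(p[1] for p in population) / len(population)   # 1.0

# The "average person" has one of each -- a combination that describes
# exactly zero people in the population.
```

The mean is a perfectly valid group statistic and a useless description of any individual, which is the whole point.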

    If it helps, remember this: it’s not scientific to say “my IQ is just 76.” You should say “My most recent IQ test score was 76.”

  • LLMs are basically just good pattern matchers. But just as A* search can find a better path than a human can by breaking the problem down into simple, mechanical steps, an LLM can make progress on an unsolved problem if it’s used properly and combined with a formal reasoning engine.
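To make the A* comparison concrete, here’s a minimal sketch of the algorithm on a toy grid – plain Python, my own naming (`a_star`, etc.), not any particular library:

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a grid of 0 (free) / 1 (wall) cells,
    using Manhattan distance as an admissible heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    parent = {}
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:                  # reconstruct path back to start
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        if g > best_g.get(cur, float("inf")):
            continue                     # stale heap entry, skip it
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                          # goal unreachable

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
```

Each step is trivial (pop the cheapest node, expand its neighbours), yet the loop reliably finds an optimal route through the only gap in the wall – simple steps, non-obvious result.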

    I’m going to be real with you: almost every big insight behind a new mathematical idea builds on the math that came before. Nothing is as purely original as AI detractors seem to believe.

    By “does some reasoning steps,” OpenAI presumably just means invoking the LLM iteratively so that it can review its own output before giving a final answer. It’s not a new idea.
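For anyone curious, the general pattern is a draft → critique → revise loop. A toy sketch follows – `generate` here is a hypothetical stand-in with hardcoded strings, not a real model call, and this is my guess at the generic pattern, not OpenAI’s actual implementation:

```python
def generate(prompt):
    # Hypothetical stand-in for a language-model call. It "solves" a
    # toy arithmetic task just so the loop below is runnable.
    if "Critique" in prompt:
        return "looks correct" if "4" in prompt else "wrong, the sum should be 4"
    if "Revise" in prompt:
        return "2 + 2 = 4"
    return "2 + 2 = 5"   # a deliberately flawed first draft

def iterative_refine(task, max_rounds=3):
    """Draft -> critique -> revise: the model reviews its own output
    before a final answer is returned."""
    draft = generate(task)
    for _ in range(max_rounds):
        critique = generate(f"Critique this answer: {draft}")
        if "correct" in critique:
            break                        # the critique pass is satisfied
        draft = generate(f"Revise using this critique: {critique}. Task: {task}")
    return draft

answer = iterative_refine("What is 2 + 2?")
```

The flawed first draft gets caught by the critique pass and fixed on the next round – the same model, just pointed back at its own output.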