Are_Euclidding_Me [e/em/eir]

  • 0 Posts
  • 41 Comments
Joined 2 years ago
Cake day: May 1st, 2023



  • God damn, you’re infuriating. You think I’m using “objective” measures of intelligence when I say so-called “AI” isn’t intelligent? Those “objective” measures of intelligence would agree with you, no? An LLM would do better on an IQ test than many humans, and yet I believe that humans truly think, whereas LLMs only regurgitate. Isn’t that true? (To be clear, I don’t expect you to agree that LLMs don’t think; I’m asking, rhetorically, whether the previous sentence is a fair summary of the facts and my point.)

    Tell me, what are the “aspects” of intelligence you want to “objectively” measure? Also, historically, measuring intelligence is problematic because of racism and sexism. It’s fucking bigotry, not stupidity, fucking hell. Unless you’re going to argue that bigotry arises from stupidity, in which case, well, you’ve got a lot to learn.

    I don’t think you’re deplorable, although I do think you might be a little immature, but I’m not going to push on that point, because I don’t really care. I don’t think you’re lesser in any way. I think you’re mistaken, and mistaken doesn’t mean less than. You’re as deserving of a decent life as I am, and I truly hope you’re living one, and continue to do so in the future.

    But I’m really done with this conversation. Feel free to get the last word in, I likely won’t respond. Please know I bear you no ill will, even though I firmly believe you’re entirely and completely wrong about so-called “AI”.


  • Well I have a pretty grim outlook on humanity,

    That sucks, I’m sorry. I think humans are actually pretty dang cool and good.

    The rest of your response is pretty nonsensical, I gotta say. I think I need to stop talking to you. Good luck with your future life, I legitimately hope it’s good. I don’t know what I hoped to get out of this interaction, but hey, it’s happened, so, neat, I guess.

    One thing I should have been more clear about during our interactions is that I’m aware that simple building blocks can lead to complex emergent behavior, fucking of course they can, but I never said that explicitly, so that’s on me. I don’t believe the building blocks of so-called “AI” will lead to actual intelligence, but that doesn’t mean I don’t believe in complex emergent behavior; we’re all made of atoms, aren’t we?

    It worries me that you didn’t respond, even a little, to my meanest two paragraphs; my arguments about objective measures of intelligence didn’t make any impact, I guess? Anyway, it doesn’t matter, I’ve said my piece. Please be skeptical of IQ and other “objective” measures of intelligence.

    If I could leave you with one thought for the future, it would be: believe in humanity more. Humans are awesome and intelligent and worth believing in. Sure, it doesn’t feel like that these days, we’re killing the earth and causing untold amounts of suffering, for humans, non-human animals, and every other living thing on this earth, but I still think it’s true. The only hope for humanity is that humans find a way through, that we find a way to kill capitalism before it kills us.


  • they clearly do more than copy existing texts

    No kidding. They chop existing texts into tiny pieces and use statistics to decide which piece to print next. They don’t group text “rationally”; they group text in a way that convinces you something rational has happened. I’ve seen enough absolute nonsense to know there’s no rationality happening.

    it substitutes, or fools, by having at least some inkling of the meaning of words, or can “intuit” a good response.

    Once again, no. It has no idea what words mean, and the only reason it can (sometimes) give a good response is that it looks at which words and phrases tend to follow which other words and phrases in its massive, and ever-increasing, training data sets.
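
    To be concrete about what I mean by “looks at which words tend to follow which”: here’s a toy bigram sketch in Python. Everything in it is invented for illustration, and a real transformer-based LLM is vastly more elaborate, but the statistical spirit is the same.

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "training corpus" standing in for the massive data sets.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Bigram statistics: count which word follows which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # Generate text by sampling each next word in proportion to how
    # often it followed the previous word -- statistics, not meaning.
    word = "the"
    output = [word]
    for _ in range(6):
        counts = follows[word]
        if not counts:  # dead end: this word never appeared mid-corpus
            break
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        output.append(word)
    print(" ".join(output))
    ```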

    Maybe you should conclude that humans are less intelligent than you think. Or, as Obi-Wan Kenobi said, the ability to speak doesn’t make you intelligent, haha. If you pick a random topic and ask it to write some text about it, and it does better than a group of humans in the lower half of IQ, then you have objective evidence of intelligence. And that is what shocks and offends people about AI, haha.

    This paragraph is fucked, and it implies some pretty nasty things about your worldview. You might be correct that LLMs can write better text than a portion of humanity, but to jump from that to saying LLMs are more intelligent than the portion of humanity who don’t write as well is incredibly shitty! Writing ability is strongly correlated with education (obviously), so what you’re saying is that people who have had less opportunity for education are less intelligent. They aren’t; they just have less privilege. And bringing up the notoriously racist IQ test as a proxy for intelligence is, uh, not a good look.

    I suspect you might be young, because I used to believe similar things about some sort of “objective intelligence”. I used to think that some people were just smarter than others and there was probably some objective way to measure that. (Unsaid, of course, is that I was one of the “smart ones”, it really flattered my ego.) As I’ve grown up I’ve realized that’s not fucking true, people have all sorts of different capabilities, and people who I once would have dismissed as “stupid”, well, they aren’t. They have less education than I do, not less intelligence.

    I also assume that it won’t take too long to create models that can combine both and add the ability to do math and boolean reasoning.

    If it were so straightforward, this would have happened by now. It hasn’t. I don’t believe it will.

    without an emotional or tribal bias.

    Everything humans make has an emotional or tribal bias. LLMs are no different. They pick up the biases of their training sets, and it’s impossible to have a “bias-free” training set. Anyone promising “unbiased” or “objective” anything is someone you should watch out for; they’re lying, but they may not know that they’re lying.




  • Hey, thanks for responding to me. It’s interesting to see other people’s thoughts, even when (especially when) they’re so different from my own.

    I disagree with just about everything you’ve said here, but I’m not going to try very hard to convince you that you’re wrong, because I don’t think it’ll work and I don’t think it matters.

    I’ll just say, it’s not like I’ve never used an LLM. For the past year or so I’ve been working for one of those shitty, shitty AI training companies, trying to improve the mathematical reasoning capabilities of various state-of-the-art LLMs. In all that time, I’ve seen zero evidence that these fucking things can reason. They can regurgitate with the best of them: ask them to prove that 2 is prime or to find the zeros of f(x) = x^2 - 4 and they’ll perform perfectly, because those problems are found in every introductory textbook. But ask them something that requires synthesizing several bits of knowledge and isn’t a standard problem found in every textbook, like finding the critical points of a relatively complicated function, and they completely shit the bed, responding with absolute nonsense. Not a slightly wrong reasoning chain, but straight-up nonsense.
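
    To give a concrete (and entirely hypothetical, this isn’t one of the actual training problems) example of what I mean by “a relatively complicated function”: take f(x) = x * e^(-x^2). Finding its critical points means applying the product and chain rules to get f'(x) = e^(-x^2) - 2x^2 * e^(-x^2) = (1 - 2x^2) * e^(-x^2), then noticing that e^(-x^2) is never zero, so f'(x) = 0 exactly when 1 - 2x^2 = 0, i.e. at x = ±1/√2. None of those steps is hard on its own; it’s chaining several steps together outside a textbook template that I consistently watch these models botch.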

    I’ve been training these things for about a year. There are thousands of people, at just this one company, spending who knows how many thousands of hours training these things, and I’ve seen zero improvement in reasoning capability. These things don’t reason, they regurgitate. The longer I do this shit, the clearer it becomes to me that so-called “AI” is a very well-disguised Mechanical Turk! Everything it does, it does because it’s copying straight from something a human has done.

    So that’s why I was curious what you get out of them. And reading your response, you pretty clearly believe they can reason and synthesize information, at least when coaxed properly. I’d suggest caution there: the responses you’re getting aren’t intelligent or thought out, they’re chopped-up copies of opinions that real people have had, and it’s probably better to seek out the people who’ve had those opinions. I’m sympathetic to the issue that there’s simply too much information available for anyone to interact with intelligently, I think that’s a real problem of the modern world. I’m just not convinced that trusting LLMs to bridge that gap is a good idea, given what I’ve seen of their (complete lack of) reasoning ability.

    Oh, just one more tiny little thing: there’s an ocean of difference between how well we understand brains versus how well we understand neural nets. We can construct neural nets, after all, and we sure as shit can’t construct a brain.


  • I’ve recently had a conversation with ChatGPT about Ukraine

    What do you get out of these conversations? I’ve been trying to figure out why people enjoy talking to LLMs, and I straight up don’t get it. What’s the point of asking an LLM about geopolitics? Do you find its analysis accurate and compelling? I certainly don’t; I find it banal, contradictory, a meaningless mush of words that technically fit together to make sentences. These LLMs don’t actually reason, we know that, because we know how they’re constructed. So I simply don’t understand: what’s the point? I get talking to a human, even a human with a deeply contradictory worldview. That’s interesting because with humans, we know there’s a mind there, so figuring out how that alien mind works can be fascinating, especially if the person we’re talking to is quite different from us. But we know how LLMs work; the math behind them is quite straightforward. So again I ask: what is the point in talking to an LLM? What new thing are you learning about yourself, other people, or the world at large?




  • Hey, I was at one point a grad student. I was never paid less than minimum wage.

    In fact, I was paid more than groundskeepers and janitorial staff at the university I worked at.

    I say this not to say “pay grad students less”, but just to point out that the job “grad student” isn’t the miserable crushing poverty that people make it out to be. It just isn’t. Sure, it’s the closest to poverty that many people in academia experience, but it’s nothing like the real thing. The real thing is experienced by the staff of the university.

    Also notice how in academia there’s this idea of the university as a community. The non-faculty staff of a university are basically never considered part of this community. They keep it running, they make sure that students, faculty, and administration have nice clean facilities, but those facilities are not meant to be used by janitors, cooks, groundskeepers and the like. The people who take care of the facilities are not the same people who use them.

    Sorry, I know this comment is pretty unrelated, but I think it’s important to keep in mind that even though grad students are often treated pretty shittily, it’s simply not the same as how janitors, groundskeepers, cooks and so on are treated.




  • I found this page on their website, it’s real.

    Which leads me to an important question: who the fuck are these assholes?

    I’ve looked at their “Experts” page, I don’t recognize anyone, I’ve looked at their “About” page, it’s surprisingly low on info. Oh, their building in DC is “an iconic building that faces the Lincoln Memorial and the Vietnam Veterans Memorial, and that symbolizes our nation’s commitment to peace.” Ok, but like, why is that on your “About” page, you fucking weirdos?!

    So I’m at a loss. I’m sure (absolutely positive) there’s info to be found here about this deeply shitty organization, but I think I’d have to spend hours and hours looking for weird connections, because they seem oddly reticent to tell casual visitors to their website anything about anything. And that weirds me the hell out. Who are these fucks?! Who is reading these (deeply shitty) articles they’re writing?! Who funds them?! Why don’t they have any board members listed where I can find them?!




  • Fair, I’m just saying it’s not funny and I’d rather have never seen it. “AI” bullshit pisses me right the hell off and I’d like if it didn’t exist and I never had to see it. I’m actually fucking for real tilted and malding. I know it shouldn’t matter, it’s one fucking picture of an obviously fake wolf/fox, who even fucking knows, but I hate it. I absolutely fucking despise any image that was made by “AI” and I really, really wish people wouldn’t post anything of the sort except for the purpose of making fun of it.

    Once again, I’d like to make clear, I understand I’m the asshole here and my extreme hatred is too intense and you shouldn’t have to be subjected to it. So I am sorry for that. But I also would really, really like to live in a world without “AI” “art” and I’m straight up pissed that I don’t, but instead live in a world in which people sometimes (frequently) post “AI” images as though they’re worth engaging with. And they just fucking are not.


  • I hate this. This shitty fucking shit “AI” “art” fucking sucks. The hexagonal bear is cool because it’s an actual photo of an actual bear. This awful bullshit fucking sucks because it’s shitty goddamn “AI”.

    I’d really appreciate it if you wouldn’t post shitty “AI” shit as though it’s good. It isn’t.

    (I am sorry for how mean this comment is, because you’re an actual human being who’s probably pretty cool, so I hate being a fucking asshole to you, which I recognize I have been. But please, holy fuck, don’t post fucking “AI” art as though I’m supposed to like it. I’m an asshole for hating “AI” as much as I do, I’m sure, but I will die on this hill.)


  • Oh, is this Inception? I was vaguely excited to find out what it is and watch it, because I (for some reason, it certainly isn’t because he’s a good actor) enjoy Joseph Gordon-Levitt in things, and, well, of course you gotta love Tom Hardy. But I’ve already seen Inception (before I knew who these two people were, clearly), and, like, meh, it’s really not that good. “Matrix but straight” might honestly be overselling it. Damn. Here I was hoping for schlocky action with actors I like that I’d somehow missed out on, and instead it’s takes-itself-too-seriously action with actors I like, but I’ve already seen it and it’s not that good. Shucks.