

  • I think part of it is that if things suck, it’s an excuse not to do much: ‘things are going to shit, I’m just trying to keep my head up, nothing really matters, we’re all doomed anyway…’ Compare that to: ‘things are getting better, the future is looking positive… I’m not really doing much though.’

    It’s why people are so upset at every positive new development in anything and instantly start exaggerating flaws or just refusing to acknowledge it. Post an article backed by real science that mentions we’re doing well transitioning away from carbon and you’ll get a couple of doubtful comments and a few votes; post a random guy on the street saying ‘I don’t really know anything about it, but we’re probably doomed’ and it’ll be front page for days.

    Another example is the sheer number of people who claim they can’t think of a single good use for natural language computing. They’ve read a hundred articles/memes about ‘AI bad’, but not a single one mentioned any of the ways it could save lives, improve lifestyle and productivity, etc…

    People don’t want hope, they want an excuse



  • That’s ridiculous. If I think that one apple plus one apple is going to result in three apples, then I try it and find it’s actually two, I’m not going to blame the universe; I’m going to know my understanding of arithmetic is flawed.

    It’s not that the math of the human-made model controls the universe; that would be silly. The model is the current best approximation of the actual math that defines the universe.

    An accurate model allows you to predict the outcome of events, like we can predict how many apples will be in the bag. With some things it gets very complicated because there are lots of parts and various possible states, but we can model that with statistics and calculus and the like. We can even make a set of all possible results and use that as a map to tell us whether something is possible, how likely it is, and what we can do to make it more or less likely. Nothing the guy said was controversial; we can map cellular interactions even if that requires complex multidimensional math (mathematicians have had to get used to doing this sort of math a lot recently, so I’m sure they’ll manage).
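
    To make the ‘map of all possible results’ idea concrete, here’s a toy sketch in Python. I’m using two dice as the system being modelled, purely for illustration, since they’re simple enough to enumerate completely:

    ```python
    from itertools import product
    from collections import Counter

    # Enumerate every possible result of rolling two six-sided dice:
    # this is the "map" of outcomes the model gives us.
    outcomes = Counter(a + b for a, b in product(range(1, 7), repeat=2))
    total = sum(outcomes.values())

    # The map tells us whether a result is possible and how likely it is.
    print(outcomes[7] / total)   # ~0.167 -> a 7 is the most likely sum
    print(outcomes[13] / total)  # 0.0    -> a 13 is impossible
    ```

    The same enumerate-then-weigh approach scales up in principle to far bigger state spaces; the math just gets higher-dimensional.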



  • Yes, that’s a model. I’m not saying the model magically controls reality, but that the underpinning reality is math. The reason you always have two apples in a bag when you start with one and add another is because of math; the human model of that isn’t controlling it, but if we want two apples in a bag and we currently only have one, then we can use our model to determine how many apples we need to add to the bag.

    The same is true of more complex systems: if we can accurately model the cellular interactions, then we can derive solutions in the same way.



  • But it doesn’t do that instantly, and it does it for good reason. Eyes and the sections of the brain using them require energy and are vulnerable to infection, so in situations where they don’t provide an advantage they increase the likelihood of death before breeding. Any offspring born with less energy devoted to eyes therefore has a small advantage, which over a very long time results in eyes being selected away.

    So unless the creatures reach a perfect form for their environment, they’ll always be in the process of changing and have some of the old junk in there. Also, if the formerly useful part doesn’t make any real difference to survivability, there’s no force driving it to be selected away; it might eventually be removed by lots of pure chance events, but that’s going to take a huge number of generations, meaning the middle period where the junk hasn’t yet been removed is going to be very long.
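
    You can see the difference in a toy simulation. This is a minimal sketch with made-up numbers (the population size, the 50% starting frequency, and the 5% fitness cost are all illustrative assumptions, not real biology):

    ```python
    import random

    def generations_until_resolved(fitness_penalty: float, pop_size: int = 1000) -> int:
        """Generations until a trait's frequency hits 0 (lost) or 1 (fixed).

        fitness_penalty > 0 models a costly trait (e.g. unused eyes);
        fitness_penalty == 0 models neutral junk that can only drift away.
        """
        freq = 0.5  # the trait starts in half the population
        gens = 0
        while 0.0 < freq < 1.0:
            # Carriers contribute slightly less to the parent pool when costly.
            weighted = freq * (1.0 - fitness_penalty)
            p = weighted / (weighted + (1.0 - freq))
            # Each new generation is a random draw from that parent pool.
            freq = sum(random.random() < p for _ in range(pop_size)) / pop_size
            gens += 1
        return gens

    random.seed(1)
    print(generations_until_resolved(0.05))  # costly trait: gone in a few hundred generations
    print(generations_until_resolved(0.0))   # neutral junk: drifts for thousands
    ```

    Even a small cost gives selection something to push on; with zero cost, only random drift is left, and it takes far longer to resolve.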




  • If you ask it to make up nonsense and it does, then you can’t get angry lol. I normally use it to help analyse code or write sections of code, and sometimes to teach me how certain functions or principles work; it’s incredibly good at that. I do need to verify it’s doing the right thing, but I do that with my own code too, and I’m not always right either.

    As a research tool it’s great at taking a basic, dumb description and pointing me to the right things to look for, especially in areas full of technical terms and obscure jargon.

    And yes, they can occasionally make mistakes or invent things, but if you ask properly and verify what you’re told, then it’s pretty reliable, far more so than a lot of humans I know.


  • Why would I rebut that? I’m simply arguing that they don’t need to be ‘intelligent’ to accurately determine the colour of the sky, and that if you expect an intelligence to know the colour of the sky without ever having seen it, then you’re being absurd.

    The way the comment I responded to was written bears no relation to reality, and I addressed that.

    Again, as I said in other comments: you’re arguing that an LLM is not Will Smith in I, Robot and/or Scarlett Johansson playing the role of a USB stick, but that’s not what anyone sane is suggesting.

    A fork isn’t great for eating soup, and neither is a knife, but that doesn’t mean they’re not incredibly useful eating utensils.

    Try thinking of an LLM as a type of NLP (natural language processing) tool, which allows computers to take normal human text as input to perform a range of tasks. It’s hugely useful and unlocks a vast amount of potential, but it’s not going to slap anyone for joking about its wife.


  • People do that too; actually, we do it a lot more than we realise. Studies of memory, for example, have shown that we create details we expect to be there to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes it.

    A lot of the newer implementations use more complex methods of fact verification. It’s not easy to explain, but essentially it comes down to the weight you give different layers. GPT 5 is already training and likely to be out around October, but even before that we’re seeing pipelines that use an LLM to code task-based processes: an LLM is bad at chess, but it could easily install Stockfish in a VM and beat you every time.
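
    A minimal sketch of that delegation idea, assuming Stockfish and the python-chess package are installed. The point is that the LLM’s job would be generating glue code like this, not picking the moves itself:

    ```python
    import chess
    import chess.engine

    # The pipeline's LLM doesn't play chess itself: it writes glue code
    # that hands the board to a real engine and relays the moves back.
    board = chess.Board()
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        while not board.is_game_over():
            result = engine.play(board, chess.engine.Limit(time=0.1))
            board.push(result.move)

    print(board.outcome())  # the engine-driven game, played to completion
    ```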


  • That’s only true on a very basic level. I understand that Turing’s maths is complex and unintuitive, even more so than calculus, but it’s a well-established fact that relatively simple mathematical operations can have emergent properties when they interact, producing far more complexity than initially expected.

    The same way the giraffe gets its spots is the same way all the hardware of our brain is built: a strand of code is converted into physical structures that interact and result in more complex behaviours. The actual reality is just math, and that math is almost entirely just probability when you get down to it. We’re all just next-word-guessing machines.

    We don’t guess words like a Markov chain; instead we use a rather complex token system in our brains which then gets converted into words. LLMs do this too: that’s how they can learn about a subject in one language and then explain it in another.

    Calling an LLM predictive text is a fundamental misunderstanding of reality. It’s somewhat true on a technical level, but only once you understand that predicting the next word can be a hugely complex operation, one which is also the fundamental math behind all human thought.

    Plus they’re not really just predicting one word ahead anymore; they do structured generation, much like image generators do: first they get the higher-level principles to a valid state, then propagate down into structure and form before making word and grammar choices. You can manually change values in the different layers and watch the output change; exploring the latent space like this makes it clear that it’s not simply guessing the next word, but guessing the next word that will best fit into a required structure to express a desired point. I don’t know how other people come up with sentences, but that feels a lot like what I do.
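
    One real technique behind ‘guessing the next word that fits a required structure’ is constrained decoding: invalid continuations are masked out before sampling. Here’s a toy sketch; the vocabulary, the pretend scores, and the single grammar rule are all made up purely for illustration:

    ```python
    import math
    import random

    vocab = ["the", "sky", "is", "blue", "banana", "."]
    # Pretend model scores: how strongly the model "wants" each next word.
    logits = {"the": 1.0, "sky": 2.0, "is": 0.5, "blue": 3.0, "banana": 2.9, ".": 0.1}

    # A structural constraint: after "the sky is", only a colour or "." is valid.
    allowed_after = {("the", "sky", "is"): {"blue", "."}}

    def sample_next(context: tuple) -> str:
        """Sample the next token, but only from structurally valid options."""
        allowed = allowed_after.get(context, set(vocab))
        # Mask out every token that would break the required structure...
        scores = {w: math.exp(logits[w]) for w in vocab if w in allowed}
        # ...then sample from what's left, weighted by the model's preferences.
        r = random.uniform(0.0, sum(scores.values()))
        acc = 0.0
        for word, s in scores.items():
            acc += s
            if r <= acc:
                return word
        return word  # guard against floating-point rounding at the boundary

    print(sample_next(("the", "sky", "is")))  # "blue" or "." -- never "banana"
    ```

    The model still ‘prefers’ banana almost as much as blue, but the structure rules it out before sampling ever happens; the next word is chosen to fit the required form.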



  • I use LLMs to create things no human has likely ever said, and they’re great at it. For example:

    ‘while juggling chainsaws atop a unicycle made of marshmallows, I pondered the existential implications of the colour blue on a pineapple’s dream of becoming a unicorn’

    When I ask it to do the same using neologisms, the output is even better. One of the words was ‘exquimodal’; I then asked it to invent an etymology, and it came up with one that combined ‘excuistus’ and ‘modial’ to define it as something beyond traditional measures, which fits perfectly into the sentence it created.

    You can’t ask a parrot to invent words with meaning and use them in context; that’s a step beyond repetition. Of course it’s not full, dynamic, self-aware reasoning, but it’s certainly not being a parrot.


  • But also the number of people who seem to think we need a magic soul to perform useful work is way, way too high.

    The main problem is that idiots seem to have watched one too many movies about robots with souls and gotten confused between real life and fantasy, especially shitty journalists way out of their depth.

    This big gotcha of ‘they don’t live up to the hype’ is 100% from people who heard ‘AI’ and thought of bad Will Smith movies. LLMs absolutely live up to the sensible things people actually hoped for, and have exceeded those expectations. They’re also incredibly good at a huge range of very useful tasks which have traditionally been considered as requiring intelligence. But they’re not magically able to do everything; of course they’re not, and that’s not how anyone actually involved said they would work or expected them to work.