i guess my issue is that neural networks as they exist now can’t produce emergent properties; they’re just fitting to data to predict the next word as well as possible, or the most probable word in an unseen sentence. That’s not how anybody learns, not mice, not humans.
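To be fair about what "predict the most probable next word" means: stripped of the neural network part, it's the same idea as the toy bigram model below — count which word follows which in training data, then pick the most frequent continuation. This is a deliberately minimal sketch (the corpus and function name are made up for illustration), not how a real LLM is implemented, but the training objective is the same flavor.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model:
# the simplest possible "learn to predict the next word from data").
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the next word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Everything past that (attention, billions of parameters) is about making the prediction better, not about changing the objective.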
Something akin to the experiments with free-floating robot arms with bolted-on computer vision seems like a much more viable approach, but there the problem is they don’t have the right architecture to feed it into, at least i don’t think they do, and even then it’ll probably stall out for a while at animal level.
my problem is that at some point they’re gonna smoosh chatgpt and that sort of stuff and other stuff together, and it might be approximating consciousness, but nerds will be like "it’s just math!" and it’ll make Commander Data sad n’ they won’t even care
well of course it could be; a flawless imitation of consciousness, after all, is the same as consciousness (aside from morality, which will be unknowable), it’s just not here at the moment