image description (contains clarifications on background elements)
Lots of different seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg “big brother is watching” poster, two images of Fluttershy (a pony from My Little Pony), one of them reading “u only kno my swag, not my lore”, a picture of Parkzer from the streamer DougDoug, and a slider gameplay element from the rhythm game osu!. The background is made light so that the text can be easily read. The text reads:
i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes for current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?
IMAGE DESCRIPTION END
i hope this doesn’t cause too much hate. i just wanna know what u people and creatures think <3
I wish people stopped treating these fucking things as a knowledge source, let alone a reliable one. By definition they cannot distinguish facts, only spit out statistically correct-sounding text.
Are they of help to your particular task? Cool, hope the model you’re using hasn’t been trained on stolen art, or doesn’t rely on traumatizing workers in the global south (who are paid pennies, btw) to function.
Also, y’know, don’t throw gasoline onto an already burning planet if possible. You might think you need to use a GPT for a particular task or funny meme, but chances are you actually don’t.
That’s about it for me I think.
edit: when i say “you” in this post i don’t mean actually you, OP, i mean in general. sorry if this seems rambly, im sleep deprived as fuck woooooo
peeps who use these models for facts are obv not aware what the models are doing. they don’t know that these models are just guessing facts.
also yes, big sad about peeps in the south being paid very poorly.
can totally see your point, thank you for commenting! <3
What does “AI for disabled people” entail? A lot of ‘good AI’ things I see are things I wouldn’t consider AI, e.g. VLC’s local subtitle generation.
I think generative AI is mainly a tool of deception and tyranny. The use cases for fraud, dehumanization and oppression are plentiful. I think Iris Meredith does a good job of highlighting the threat at hand. I don’t really care about the tech in theory: what matters right now is who builds it and how it is being deployed onto the world.
This list is missing: AI generated images are not art.
i also think that way, but it’s also true that generated images are being used all over the web already, so people generally don’t seem to care.
I disagree, but I can respect your opinion.
I used to think image generation was cool back when it was still in the “generating 64x64 pictures of cats” stage. I still think it’s really cool, but I do struggle to see it being a net positive for society. So far it has seemed to replace the use of royalty free stock images from google more than it has replaced actual artists, but this could definitely change in the future.
There are some nicer applications of image generation too, like DLSS upscaling or frame generation, but I can’t think of all that much else honestly.
There is an overarching issue with most of the extant models being highly unethical in where they got their data, effectively having made plagiarism machines.
It is not ok to steal the content of millions of small independent creators to create slop that drowns them out. Most of them were already offering their work for free. And I am talking about LMs here; writing is a skill.
Say whatever you want about big companies being bad for abusing IP laws, but this is not about the laws, not even about paying people for their work; this is about crediting people when they do work, acknowledging that the work they did had value, and letting people know where they can find more.
Also, I don’t really buy the “it’s good for disabled people” that feels like using disabled people as a shield against criticism, and I’ve yet to see it brought up in good faith.
I think we should avoid simplifying it to VLMs, LMs, Medical AI and AI for disabled people.
For instance, most automatic text capture AIs (Optical Character Recognition, or OCR) are powered by the same machine learning algorithms. Many of the finer-capability robot systems also utilize machine learning (Boston Dynamics utilizes machine learning, for instance). There’s also the ability to ID objects within footage, as well as spot faces and reference them against a large database in order to find the person with said face.
All these are Machine Learning AI systems.
I think it would also be prudent to cease using the term ‘AI’ when what we actually are discussing is machine learning, which is a much finer subset. Simply saying ‘AI’ diminishes the term’s actual broader meaning and removes the deeper nuance the conversation deserves.
Here are some terms to use instead
- Machine Learning = AI systems which increase their capability through automated iterative refinement.
- Evolutionary Learning = a type of machine learning where many instances of randomly changed AI models (called a ‘generation’) are run simultaneously, and the most effective is/are used as a baseline for the next ‘generation’
- Neural Network = a type of machine learning system which utilizes very simple nodes called ‘neurons’ for processing. These are often used for image processing, LMs, and OCR.
- Convolutional Neural Network (CNN) = a neural network which has an architecture of neuron ‘filters’ layered over each other for powerful data processing capabilities.
This is not exhaustive but hopefully will help in talking about this topic in a more definite and nuanced fashion. Here is also a document related to the different types of neural networks, and a small code sketch of a CNN below.
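To make the CNN entry a bit more concrete, here is a minimal sketch, assuming PyTorch purely for illustration (the layer sizes and the 28x28 input are made up, not taken from any particular system):

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: convolutional "filter" layers stacked on top of each other,
# followed by a small classifier head. Shapes assume 28x28 grayscale images.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # first filter layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # second filter layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of 4 random "images" in, 4 vectors of class scores out.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```

The point is only that the ‘filters’ are literally stacked layers; training such a network is the automated iterative refinement mentioned in the Machine Learning entry above.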
Mr. Krabs would use unethical LLMs, very accurate
true, he would totally replace his workers with robots, and then complain about hallucinated recipes.
A lot of those points boil down to the same thing: “what if the AI is wrong?”
If it’s something that you’ll need to check manually anyway, or where a mistake is not a big deal, that’s probably fine. But if it’s something where a mistake can affect someone’s well-being, that is bad.
Reusing an example from the pic:
- Predicting 3D structures of proteins, as in the example? OK! Worst case, the researchers will notice that the predicted structure does not match the real one.
- Predicting if you have some medical problem? Not OK. A false negative can cost a life.
That’s of course for the usage. The creation of those systems is another can of worms, and it involves other ethical concerns.
of course using ai stuffs for medical usage is going to have to be monitored by a human with some knowledge. we can’t just let it make all the decisions… quite yet.
in many cases, ai models are already better than expert humans in the field. recognizing cancer being the obvious example, where the pattern recognition works perfectly. or with protein folding, where humans are at about 60% accuracy, while google’s alphafold is at 94% or so.
clearly humans need to oversee an AI’s output, but we are getting to a point where maybe humans make the wrong decision, and deny an AI’s correct generation. so: no additional lives are lost, but many more could be saved
In my experience, the best uses have been less fact-based and more “enhancement” based. For example, if I write an email and I just feel like I’m not hitting the right tone, I can ask it to “rewrite this email with a more inviting tone” and it will do a pretty good job. I might have to tweak it, but it worked. Same goes for image generation. If I already know what I want to make, I can have it output the different elements I need in the appropriate style and piece them together myself. Or I can take a photograph that I took and use it to make small edits that are typically very time consuming. I don’t think it’s very good or ethical for having it completely make stuff up that you will use 1:1. It should be a tool to aid you, not a tool to do things for you completely.
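A minimal sketch of that email-tone use case, assuming the OpenAI Python client purely as an example (the model name, system prompt, and draft text are placeholders; any chat-style LM, local or hosted, works the same way):

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set; an OpenAI-compatible local server also works
# if you point base_url at it.
client = OpenAI()

draft = "Hi team, the report is late again. Please send it today."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": "You rewrite emails while keeping their meaning."},
        {"role": "user", "content": f"Rewrite this email with a more inviting tone:\n\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```

The output still needs the same manual tweaking described above; the sketch only shows where the “rewrite with a more inviting tone” instruction goes.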
There are so many different things that are called AI, the term AI doesn’t have any meaning whatsoever. Generally it seems to mean anything that includes machine learning somewhere in the process, but it’s largely a marketing term.
Stealing art is wrong. Using ridiculous amounts of power to generate text is ridiculous. Building a text model that will very confidently produce misinformation is pretty dumb.
There are things that are called AI that are fine, but most aren’t.
Smorty!!!
Thank you for this conversation
i don’t think i understand your comment…
or maybe that’s the point?
or maybe ur making a funi joke about u being an AI assistant?
If so:
haha lol that's so hilarious
if not: pls explain <3
yea i like LMs kinda a smol bit and like experimenting with em a lot, cuz it's kinda fun to test their capabilities and such
response output --verbose:
Line 1: Smorty!!!
Explanation: You brighten my day every time I see you doing your thing. Line 1 expresses this joy.
Line 2: Thank you for this conversation
Explanation: I am glad to see peoples’ replies to your post. Line 2 thanks you for starting this discussion.
really??? i didn’t kno i make u comf when i post a thing!! ~ i’m very happi about that!!! <3
also, i’m surprised that u still like the fact that i made this convo spring up. many peeps are very one-sided about this, and i recognize that i am more pro-ai than con-ai. i wanted to hear peeps’ thoughts about it, so i jus infodump in an image with fluttershy in it, and now we are here!
i would think that u wouldn’t like this kind of very adult topic about ai stuffs, but apparently u are oki with me asking very serious things on here…
i hope u have a comf day and that u sleep well and that u eat something nice!!! <3
I’ll just repeat what I’ve said before, since this seems like a good spot for this conversation.
I’m an idiot with no marketable skills. I want to write, I want to draw, I want to do a lot of things, but I’m bad at all of them. gpt-like ai sounds like a good way for someone like me to get my vision out of my brain and into the real world.
My current project is a wiki of lore for a fictional setting, for a series of books that I will never actually write. My ideal workflow involves me explaining a subject as best I can to the ai (an alien technology or a kingdom’s political landscape, or drama between gods, or whatever), telling the ai to ask me questions about the subject at hand to make me write more stuff, repeat a few times, then have the ai summarize the conversation back to me. I can then refer to that summary as I write an article on the subject. Or, me being lazy, I can just copy-pasta the summary and that’s the article.
As an aside, I really like chatgpt 4o for lore exploration, but I’d prefer to run an ai on my own hardware. Sadly, I do not understand github and my brain glazes over every time I look at that damn site.
It is way too easy for me to just let the ai do the work for me. I’ve noticed that when I try to write something without ai help, it’s worse now than it was a few years ago. generative ai is a useful tool, but it should be part of a larger workflow, it should not be the entire workflow.
If I was wealthy, I could just hire or commission some artists and writers to do the things. From my point of view, it’s the same as having the ai do the things, except it’s slower and real humans benefit from it. I’m not wealthy though, hell, I struggle to pay rent.
The technology is great, the business surrounding it is horrible. I’m not sure what my point is.
I’m sorry, but did you ever think of the option to try? To write a story you have to work on it and get better.
GPT or llms can’t write a story for you, and if you somehow wrangle it to write a story without losing its thread - then is it even your story?
look, it’s not going to be a good story if you don’t write it yourself. There’s a reason why companies want to push it: they don’t want writers.
I’m sure you can write something, but you have issues which you need to deal with before you can delve into this. I’m not saying it’s easy, but it’s worth it.
Also read books. Read books to become a better writer.
PPS. If you make an llm write it you’ll come across issues copyrighting it, at least last I heard.
LMs give the appearance of understanding, but as soon as you try to use them for anything that you actually are knowledgable in, the facade crumbles.
Even for repetitive tasks, you have to do a lot of manual checking to ensure they did not start hallucinating half way through.
I’ve heard this argument so many fucking times and i hate genai but there’s no practical difference between understanding and having the appearance of such, that is just a human construct that we use to try to feel artificially superior ffs
No. I am not saying that to put man and machine in two boxes. I am saying that because it is a huge difference, and yes, a practical one.
An LLM can talk about a topic for however long you wish, but it does not know what it is talking about; it has no understanding or concept of the topic. And that shines through the instant you hit a spot where the training data was lacking and it starts hallucinating. LLMs have “read” an unimaginable amount of text on computer science, and yet as soon as I ask something that is niche, it spouts bullshit. Not its fault, it’s not lying; it’s just doing what it always does, putting statistically likely token after statistically likely token, only in this case the training data was insufficient.
But it does not understand or know that either; it just keeps talking. I go “that is absolutely not right, remember that <…> is <…>”, and whether or not what I said was true, it will go “Yes, you are right! I see now, <continues to hallucinate>”.
There’s no ghost in the machine. Just fancy text prediction.
I haven’t really used AIs myself, however one of my brothers loves AI for boilerplate code, which he of course looks over afterwards. If it saves time and you only have to do some minor editing, then that seems like a win to me. It probably shouldn’t be used like this in any non-hobby project by people who aren’t adept at coding, however.
I’m a programmer as well. When ChatGPT & Co initially came out, I was pretty excited tbh and attempted to integrate it into my workflow, which kinda worked-ish? But was also a lot of me being amazed by the novelty, and forgiving of the shortcomings.
Did not take me long to phase them out again though. (And no, it’s not the models I used; I have tried again now and then with the new, supposedly perfect-for-programming models, same results.) The only edge case where they are generally useful (to me at least) is simple tasks that I have some general knowledge of (to double-check the LM’s work) but no interest in learning anything further than I already know. Which does occur here and there, but rarely.
For everything else programming-related, it’s flat-out shit. I do not believe they are a time saver for even moderately difficult programs. By the time you’ve run around in enough circles, explaining “now, this does not do what you say it does”, “that’s the same wrong answer you gave me two responses ago”, “you have hallucinated that function”, and found out the framework in use dropped that general structure in version 5, you may as well do it yourself, and actually learn how to do it at the same time.
For work, I eventually found that it took me longer to describe the business logic (and do the above dance) than to just… do the work. I also have more confidence in the code, and understand it completely.
In terms of programming aids, a linter, formatter and LSP are, IMHO, a million times more useful than any LM.
Honest question, how does AI help disabled people, or which kinds of disabilities?
One of the few good uses I see for audio AI is translation using the voice of the original person (though that’d deal a significant blow to dubbing studios)
fair question. i didn’t think that much about what i meant by that, but here are the obvious examples
- image captioning using VLMs, including detailed multi-turn question answering (smol captioning sketch below)
- video subtitles, already present in youtube and VLC apparently
i really should have thought more about that point.
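jus to make the image captioning point concrete, here’s a smol sketch, assuming python with huggingface transformers and the BLIP captioning model purely as one example (the “photo.jpg” path is a placeholder; bigger VLMs like the ones in the smol info list handle the multi-turn question answering part):

```python
# smol image captioning sketch: picture goes in, short text description comes out,
# which a screen reader could then read aloud.
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("photo.jpg").convert("RGB")  # placeholder path to any picture
inputs = processor(images=image, return_tensors="pt")

caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
```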