• 0 Posts
  • 32 Comments
Joined 5 months ago
Cake day: February 15th, 2025



    1. they can already block VPN traffic (unless you use their VPN)

    2. their whole business model is based on them being a man-in-the-middle that decrypts SSL and inspects the packets in plaintext

    3. about a third of all websites worldwide use Cloudflare, so they have a pretty good bird's-eye view of the behaviour of any machine, datacenter or IP range that visits a lot of websites, which in turn makes it trivial to tell whether it is normal user behaviour or a crawler (a toy sketch of what such a heuristic could look like is below)
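
    Just to illustrate the idea, here is a naive Python sketch. It is entirely my own toy heuristic: the ClientStats fields and the thresholds are assumptions for illustration, not anything Cloudflare actually does.

    ```python
    # Toy crawler-vs-user heuristic over cross-site behaviour (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class ClientStats:
        requests_per_minute: float   # sustained request rate seen across all sites
        distinct_sites: int          # how many unrelated sites were hit
        asset_fetch_ratio: float     # share of page loads that also fetched images/CSS/JS

    def crawler_score(c: ClientStats) -> float:
        """Rough 0..1 'looks like a crawler' score."""
        score = 0.0
        if c.requests_per_minute > 60:   # humans rarely sustain more than a page per second
            score += 0.4
        if c.distinct_sites > 200:       # humans rarely visit hundreds of unrelated sites a day
            score += 0.3
        if c.asset_fetch_ratio < 0.5:    # many crawlers skip images/CSS/JS
            score += 0.3
        return score

    human = ClientStats(requests_per_minute=3, distinct_sites=15, asset_fetch_ratio=0.95)
    bot = ClientStats(requests_per_minute=500, distinct_sites=5000, asset_fetch_ratio=0.05)
    print(crawler_score(human), crawler_score(bot))  # 0.0 vs 1.0
    ```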










  • Sounds like you are in over your head.

    Pterodactyl has a Discord; why don't you go there for dedicated support?

    Regardless of where you ask: if you want help, you should provide detailed information. State exactly which commands you entered, from start to finish, skipping nothing, and include the output you got, especially the errors.


  • HelloRoot@lemy.lol to Privacy@lemmy.ml · AI to make us more private? · edited 2 days ago

    Good input, thank you.


    As far as I know, none of them had random false data so I’m not sure why you would think that?

    You can use topic B as an illustration for topic A, even if topic B does not directly contain topic A. For example (during a chess game analysis): “Moving the knight in front of the bishop is like a punch in the face from Mike Tyson.”


    There are probably better examples of more complex algorithms that work on data collected online for various goals. When developing those, a problem that naturally comes up is filtering out garbage. Do you think it is absolutely infeasible to implement one that detects AdNauseam specifically? (A naive sketch of what I mean follows below.)
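
    To make that concrete, here is a naive Python sketch of the kind of filter I mean. It is purely my own illustration (the threshold values are made-up assumptions, not any ad network's real pipeline); the point is just that a profile clicking essentially every ad is statistically very far from real users, whose click-through rate is typically well below 1%.

    ```python
    # Naive 'does this profile click basically every ad?' check (illustrative only).

    def looks_like_adnauseam(ad_impressions: int, ad_clicks: int,
                             min_impressions: int = 50,
                             ctr_threshold: float = 0.9) -> bool:
        """Flag a profile whose click-through rate is implausibly high."""
        if ad_impressions < min_impressions:
            return False  # not enough data to judge
        ctr = ad_clicks / ad_impressions
        return ctr >= ctr_threshold

    print(looks_like_adnauseam(200, 198))  # True  -> clicks everything, label or discard
    print(looks_like_adnauseam(200, 1))    # False -> ordinary user
    ```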



  • HelloRoot@lemy.lol to Privacy@lemmy.ml · AI to make us more private? · edited 2 days ago

    “AdNauseam (Firefox add-on that will click all ads in the background to obscure preference data)”

    is what the top level comment said, so I went off this info. Thanks for explaining.

    “Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.”

    I didn’t mean it like that.

    I meant it in an illustrative manner: the results of their mass tracking and psychological profiling were so dystopian that filtering out random false data seems trivial in comparison. I feel like a bachelor's or master's thesis would be enough to come up with a sufficiently precise method.

    Compared to that, it seems extremely complicated to algorithmically figure out what exact customized lie you have to tell each individual to manipulate them into behaving a certain way. That probably took a larger team of smart people working together for many years.

    But ofc I may be wrong. Cheers


  • HelloRoot@lemy.lol to Privacy@lemmy.ml · AI to make us more private? · edited 2 days ago

    You are just moving the problem one step further, but that doesn’t change anything (if I am wrong please correct me).

    You say it is ad behaviour + other data points.

    So the picture of me they have is: [other data] + doesn’t click ads like all the other adblocker people (which is accurate)

    Why would I want to change it to: [other data] + clicks ALL the ads like all the other AdNauseam people (which is also accurate)

    How does using AdNauseam or not matter? I genuinely don't get it. It's the same [other data] in both cases, and whether you click on none of the ads or all of them can be detected.


    As a bonus, if AdNauseam clicked just a couple of random ads, they would have a wrong picture of my ad-clicking behaviour.

    But if I click none of the ads, they have no accurate picture of my ad-clicking behaviour either.

    Judging by incidents like the Cambridge Analytica scandal, the algorithms that analyse the data are sophisticated enough to separate your true interests (collected from the rest of your browsing behaviour) from your ad-clicking behaviour whenever the two contradict each other or one of them looks random. (A toy sketch of that kind of cross-check is below.)
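
    Purely to illustrate the kind of cross-check I mean (the interest categories, weights and threshold here are invented for the example, not anything a real ad network publishes): compare the interest profile inferred from general browsing with the one inferred from ad clicks, and if they barely overlap, treat the ad-click signal as noise.

    ```python
    # Toy cross-check: does the ad-click profile contradict the browsing profile?
    from math import sqrt

    def cosine(a: dict[str, float], b: dict[str, float]) -> float:
        keys = set(a) | set(b)
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    browsing_interests = {"linux": 0.8, "selfhosting": 0.7, "privacy": 0.9}
    ad_click_interests = {"luxury_cars": 0.5, "cruises": 0.6, "perfume": 0.4}  # looks random

    if cosine(browsing_interests, ad_click_interests) < 0.2:
        print("ad clicks contradict the browsing profile -> down-weight them as noise")
    ```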


  • On Linux you can also use vmtouch to force-cache the project files in RAM. This would speed up the first compilation of the day. On repeated compilations, files that are read from disk would naturally be in the RAM cache already, and it would not matter what drive you have.

    I have used this in the past when I had slow drives. I was forcing all necessary system libs (my IDE, the JDK, etc.) and my project files into RAM at the start of the day, before going on a 2-minute coffee break while it read all that stuff from an HDD. That sped up the workflow in general, at least at the start of the day.

    It is not the same as a ramdisk, since the normal Linux file cache writes changes back to the disk in the background. (A rough sketch of the warm-up step is below.)
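
    For reference, my warm-up step boiled down to something like this Python sketch (assuming vmtouch is installed; the paths are placeholders you would adjust to your own setup):

    ```python
    # Pull project files and tooling into the Linux page cache via vmtouch.
    import subprocess

    PATHS = [
        "/home/me/projects/myapp",  # project sources (placeholder path)
        "/usr/lib/jvm/default",     # JDK install (placeholder path)
    ]

    for path in PATHS:
        # -t : touch every page of every file, pulling it into the page cache
        # -v : show progress
        subprocess.run(["vmtouch", "-t", "-v", path], check=True)
    ```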

    You can also pin a specific process to your fastest core, so that the core gets no tasks except the one you want it to run. But that seems like more hassle than it's worth, so I never tried it. (A minimal sketch of the pinning part is below.)
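
    A minimal sketch of the pinning half (Linux only; core 7 is an arbitrary example, and keeping other processes off that core would additionally need something like isolcpus or cpusets, which is the hassle part I never bothered with):

    ```python
    # Pin this process (and its children) to one chosen CPU core on Linux.
    import os
    import subprocess

    FAST_CORE = 7  # assumption: pick whichever core boosts highest on your CPU

    os.sched_setaffinity(0, {FAST_CORE})          # 0 = the calling process
    subprocess.run(["make", "-j1"], check=False)  # child inherits the affinity
    ```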