
  • The other servers do cache the content for some time, yes, but if your server is based in a country hostile to your posts, then you are vulnerable to takedowns, as you say, and you could be inconvenienced by the admins of your server deleting your account or the like.

    The benefit I’m saying we have in the fediverse is that you can pick a server in a politically safe area (i.e. outside Turkey in this case), so the admins are less likely to comply, especially if the server is small or doesn’t care about being blocked by that country (blocking is usually the only recourse the country has, unless you have an office or staff there that can be arrested, which is unlikely if your server is run by some dude in another country).



  • I’m saying that if your home server (mastodon.social in your example) is outside of Turkey, then there is less reason for them to comply in the first place, because they only risk the mastodon.social server being blocked in Turkey. That one is a bad example because they’re one of the largest servers and might have a bunch of users in Turkey; if you want to be extra safe, you’d want to pick a server that isn’t so big, so that they’re less likely to care about complying with some other country that they might not have any users from.

    If the server you use is based inside the country that has a problem with your content, then you’d be screwed, though all the other servers will still mirror and cache your content for a while even if you get taken down.

    The resiliency lies in the fact that you can choose to register in a country that is politically friendly towards your posts; or, if your home country is friendly but you still want to avoid takedowns, you can self-host a single-user instance and refuse requests from other countries.

    Edit: Now that I think about it, there’s also the fact that as long as the account itself isn’t limited by its home server, the content in question remains accessible through the federated copies. So if the home server isn’t within Turkey’s jurisdiction and doesn’t take down the account, the country trying to suppress the content would need to send takedown or geofencing requests to every individual server on the entire fediverse, since the home server would keep freely federating it to every server with users who follow the account. Otherwise they would need to block every fediverse server, plus every new one that pops up each day.
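
    To make that concrete, here’s a toy sketch in Python of why pressuring one server doesn’t touch the copies already federated elsewhere. The class and server names are entirely hypothetical; real ActivityPub servers deliver posts over signed HTTP requests with JSON-LD payloads, not method calls.

    ```python
    # Toy model of ActivityPub-style federation (hypothetical names throughout).
    class FediServer:
        def __init__(self, name, country):
            self.name = name
            self.country = country
            self.cache = {}  # post_id -> content mirrored from remote servers

        def federate_out(self, post_id, content, follower_servers):
            # The home server pushes the post to every server hosting followers.
            for server in follower_servers:
                server.cache[post_id] = content

    home = FediServer("small-instance.example", country="NL")
    mirrors = [FediServer(f"mirror{i}.example", "various") for i in range(5)]
    home.federate_out("post-1", "content Turkey wants removed", mirrors)

    # Blocking the home server nationally does nothing to the mirrored copies;
    # a takedown would have to reach every mirror, and every future one.
    assert all("post-1" in m.cache for m in mirrors)
    ```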


  • The difference is that if your home server is outside of Turkey, then you can tell them to kick rocks. Bluesky probably complies because they don’t want to be blocked in Turkey. In a truly decentralized system like ActivityPub, only the server hosting the account / content in question risks being blocked, a risk that shrinks to almost nothing the closer you get to a single-account instance. Meanwhile, every other server not in Turkey wouldn’t notice a difference.

    Edit: this was under the assumption that they took it down completely, but it looks like they only geofenced it. Regardless, if they were pressured enough they would be capable of completely hiding an account worldwide, which isn’t possible with ActivityPub without the legal alignment of every instance’s country; Bluesky, on the other hand, has sole control of the only relay.


  • I think we’ll have to agree to disagree then; I don’t think that is at all the obvious interpretation, and I don’t think everyone needs to clarify where they live when talking about it to “avoid the issue”.

    Imo, if people making assumptions about others living in the US annoys you, then you should find it even more annoying when someone assumes where you live AND assumes you intended to be presumptuous about it.


  • Is it wrong to want to talk about the place you live in without telling people where you live? Should everyone be required to state the place they live in any time they talk about it? I don’t really see what the problem is with speaking about your place of residence without revealing where you live. I don’t get how not mentioning where you live means you assume everyone knows. Maybe you not knowing is intentional.

    While I think it’s annoying when people assume others live in the US, I think it’s even more annoying to both assume people who don’t mention where they live must live in the US and also assume they intended you to know that they live in the US.




  • talking like your city is the default and everyone knows which one you’re talking about.

    Does this mean that everyone must always specify the geographic area they are from when they talk about it lest they risk being accused of assuming everyone knows? I often say that “we need public transit in my city” and it never once crossed my mind that other people would know or assume what city I’m referring to.

    I still don’t see how saying that you want x or y in your country is equivalent to talking like your community is the default.

    I would totally agree if the statement was “we need x in my country and you all should vote for it” because that would be assuming everyone reading is able to participate and therefore lives there. But that’s far from what the statement was, which made no assumptions and didn’t even mention a country. All they said was that they want something in their country.



  • BakedCatboy@lemmy.ml to Buy European@feddit.uk · “10 lessons for stronger movements”
    The text made me suspicious - I never really thought about why it looks weird, but on closer inspection it looks like almost every letter is unique, which gives it such an uncanny look. If it were made with a normal font, repeated letters should be near-identical. I suppose some artists who hand-draw text could be caught up in this, but I don’t think that’s common unless they’re drawing a uniquely stylized font; for plain fonts like this one, it wouldn’t make sense to hand-draw each character.

    The two apostrophes in “don’t” and “doesn’t” in the bottom left are a super obvious example.
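
    As a rough sketch of that “repeated glyphs” check (my own illustration using Pillow and NumPy; the file name and crop coordinates are made up), two crops of the same character set in a real font should differ by almost nothing, while AI-drawn text renders each glyph slightly differently:

    ```python
    import numpy as np
    from PIL import Image

    def glyph_difference(img_path, box_a, box_b):
        """Mean absolute pixel difference between two same-sized crops."""
        img = Image.open(img_path).convert("L")  # grayscale
        a = np.asarray(img.crop(box_a), dtype=float)
        b = np.asarray(img.crop(box_b), dtype=float)
        return float(np.abs(a - b).mean())

    # Hypothetical usage: compare the two apostrophes (coordinates invented).
    # diff = glyph_difference("poster.png", (10, 40, 22, 64), (180, 40, 192, 64))
    # print("likely AI-drawn" if diff > 15 else "consistent with a real font")
    ```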



  • (Protip: It’s because we know they’ve assumed the US as the default country, since that’s a really common phenomenon)

    I think that would be a valid complaint if they had actually lumped everyone in this thread into their statement by assuming that everyone here lives in the US by default, but I sincerely think that any charitable interpretation of their comment reads as “we” simply meaning “the people of my country”.





  • I think you are misunderstanding my mention of C2PA, which I only mentioned offhand as an example of prior art when it comes to digital media provenance that takes AI into account. If C2PA is indeed not about making a go/no-go determination of AI presence, then I don’t think it’s relevant to what OP is asking about because OP is asking about an “anti-ai proof”, and I don’t think a chain of trust that needs to be evaluated on an individual basis fulfills that role. I also did disclaim my mention of C2PA - that I haven’t read it and don’t know if it overlaps at all with this discussion. So in short I’m not misunderstanding C2PA because I’m not talking about C2PA, I just mentioned it as an interesting project that is tangentially related so that nobody feels the need to reply with “but you forgot about C2PA”.

    I’m more interested in the high-level: “can we solve this by guaranteeing the origin” question, and I think the answer to that is yes

    I think you are glossing over the possibility that someone uses Photoshop to maliciously edit a photo, adding Adobe to the chain of trust. If instead you are suggesting that only individuals sign the chain of trust, then nobody will bother looking up each random person who edited an image (let alone every photographer) to check whether they’re trustworthy. Again, I don’t think that lines up with what OP is asking for.

    In addition, we already have a way to verify the origin of an image: just check the source. AP posting an image on their site is currently equivalent to them signing it, so the only difference is some provenance, which I don’t think provides any value unless the edit metadata is secured, as I mention below. If you can’t find the source, then it’s the same as an image without a signature chain. This system doesn’t force unverified images to carry an untrustworthy signature chain, so you will mostly have either images with trustworthy signature chains that also include a credit you can manually check, or images with no source and no signature. The only way it can be useful is if checking the signature chain is easier than checking the website of the credited source, and if that still requires the user to make the same determination, I don’t think it will move the needle beyond making things marginally faster for those who would have checked the source anyway.

    I don’t think we need any sort of controls on defining the types of edits at all.

    I disagree; the entire idea of the signature chain appears to be to identify potentially untrustworthy edits. If you can’t be sure that the claimed edit is accurate, then you are deciding entirely based on the identity of the signatory, in which case storing the edit note is moot, because it can’t be used to narrow down which signature could be responsible for an AI modification.

    If AP said they cropped the image, and if I trust AP, then I trust them as a link in the chain

    The thing about this is that if you trust AP to be honest about their edits, then you likely already trust them to verify the source - that is something they already do, so the rest of the chain seems moot. To use your own example, I can’t see a world where we regularly need to verify that AP didn’t take an image that Infowars edited and posted on Facebook, crop it, and sign it with AP’s key. That is just about the only situation where I see value in having the whole chain, but it’s not solving a problem we currently have. If you were worried that a trusted source would take its images from an untrusted source, it wouldn’t be a trusted source. And if a trusted source posts an image that gets compressed or shared, it’ll be on their official account or website, which already vouches for it.

    Worrying about MITM attacks is not a reasonable argument against using a technology. By the same token, we shouldn’t use TLS for banking because it can be compromised

    The difference with TLS is that the malicious parties don’t own the endpoints, so it’s not at all comparable. In the case of a malicious photographer, the malicious party owns the hardware being exploited, and when a malicious party has physical access to the hardware, it’s almost always game over.

    Absolutely, but you can prevent someone from taking a picture of an AI image and claiming that someone else took the picture. As with anything else, it comes down to whether I trust the photographer, rather than what they’ve produced.

    Yes, and this is exactly the problem: it comes down to whether you trust the photographer, meaning each user needs to research the source and make up their own mind. The system would change nothing from the status quo, because in both cases you need to check the source and decide for yourself. You might argue that at least with a chain of signatures the source is attached to the image, but I don’t think that will change anything in practice, since any fake image will simply lack a signature, just as many fake images today lack a credit. The question OP seems to be asking is about a system that can make that determination for you, because leaving it up to the user to check is exactly the problem we currently have.
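
    To illustrate why the claimed edit is only an assertion, here’s a minimal toy sketch of a signature chain (my own construction using the cryptography package’s Ed25519 keys, not C2PA’s actual format): each link signs the resulting image hash, a free-text edit note, and the previous signature. The chain verifies perfectly even when a note lies about what the edit actually did.

    ```python
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def add_link(chain, key, image_bytes, note):
        """Sign (image hash, edit note, previous signature) and append a link."""
        prev_sig = chain[-1][3] if chain else b""
        img_hash = hashlib.sha256(image_bytes).digest()
        sig = key.sign(img_hash + note.encode() + prev_sig)
        chain.append((key.public_key(), note, img_hash, sig))

    def verify(chain):
        """Check every signature; says nothing about whether the notes are honest."""
        prev_sig = b""
        for pub, note, img_hash, sig in chain:
            try:
                pub.verify(sig, img_hash + note.encode() + prev_sig)
            except InvalidSignature:
                return False
            prev_sig = sig
        return True

    camera, editor = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    chain = []
    add_link(chain, camera, b"raw sensor image", "capture")
    add_link(chain, editor, b"a wholly different AI image", "crop")  # the note is a lie
    print(verify(chain))  # True: every signature is valid, and the lie is invisible
    ```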


  • I think you might be assuming that most of the problems I listed are about handling the trust of the software that made each modification - in case you just read the first part of my comment. And I’m not sure if changing the signature to a chain really addresses any of them besides having a bigger “hit list” of companies to scrutinize.

    For reference, the issues I listed included:

    1. Trusted image editors adding or replacing a signature cannot do so securely without a TPM - without one, someone can memory-edit the image buffer without the program knowing and have a “crop” edit signed by Adobe that actually replaces the image with an AI one
    2. Needs a system to grade the “types” of edits in a foolproof way - so that you can’t bypass having the image marked as “user imported an external image” by, for example, using an automated tool to paint the imported image’s pixels over the original
    3. Needs to prevent MITM attacks on the camera sensor data, which would make the entire system moot
    4. You cannot prevent someone from taking a picture of a screen showing an AI image

    There are plenty of issues with how even a trusted piece of software allows you to edit the picture, since trusted software would need to be able to distinguish between a benign edit and one adding AI. I don’t think a signature chain changes much since the chain just increases the number of involved parties that need to be vetted without changing any of the characteristics of what you are allowed to do.

    I think the main problem with the signature chain is that the chain by itself doesn’t allow you to attribute any particular part to any party in the chain. You will be able to see all the responsible parties, but have no way of telling which company in the chain signed off on a malicious modification. If the chain contains Canon, GIMP, and Adobe, there is no way to tell whether the AI added to the image got there because the Canon camera was hacked, or because GIMP or Adobe had a workaround that allowed someone to replace the image with an AI one. In the case of a malicious edit, it makes little sense to let the picture retain the Canon signature if the entire image could have been changed by Adobe, essentially putting Canon’s signature reputation on the line for things they might not be responsible for.

    This would also bring a similar problem to the one I mentioned, where there would need to be a level of trust for each piece of editing software - and you might have a world where GIMP is out because nobody trusts it, so you can say goodbye to using any smaller developer’s image editor if you want your image to stay verified. That could be a nightmare if providers such as Facebook wanted to use the signature chain to gate untrusted uploads; it would penalize using anything but Adobe products, for example, as the sketch below shows.
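
    A hypothetical sketch of that gatekeeping scenario (the allowlist and signer names are made up): if a host only accepts chains whose every signer is on its trusted list, a single GIMP link disqualifies an otherwise perfectly valid image, regardless of what the cryptography says.

    ```python
    TRUSTED_SIGNERS = {"Canon", "Adobe"}  # hypothetical host allowlist

    def upload_allowed(chain_signers):
        # Pure policy check; assumes the signatures themselves already verified.
        return all(signer in TRUSTED_SIGNERS for signer in chain_signers)

    print(upload_allowed(["Canon", "Adobe"]))          # True
    print(upload_allowed(["Canon", "GIMP", "Adobe"]))  # False: rejected for the GIMP link alone
    ```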

    In short, I don’t think a chain changes much besides increasing the number of parties you have to evaluate, complicating validation without helping you attribute a malicious edit to any party. And now you have a situation where GIMP, for example, might be blamed for being in the chain when the vulnerability was Adobe’s or Canon’s. My understanding of the question is that the goal is an automatic, final determination of authenticity, which I think is infeasible. The chain you’ve proposed sounds closer to a “web of trust” style system, where every user needs to create their own trust criteria and decide for themselves what to trust, which I think defeats the purpose of preventing gullible people from falling for AI images.