  • Don’t know about the rest, but…

    Does reflashing a ROM fix it?

    The phones appear to be simply dead, with no response to anything: no way to connect over ADB, no way to connect over fastboot, nothing.

    Also, the bootloader allows flashing over the cable only when it’s unlocked (at least on Pixels; I couldn’t find anything relevant in the Android documentation). The vast majority of Pixels should have their bootloaders locked, and unlocking first requires enabling the “OEM unlocking” toggle in the system settings. So it’s pretty safe to say that most of these Pixels cannot be recovered: if Android fails to boot, you can’t get into settings, and if you can’t get into settings, you can’t unlock the bootloader.
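    For concreteness, here’s a rough sketch of the usual recovery path that a locked bootloader blocks (standard adb/fastboot commands; the exact refusal message varies by device):

    ```
    adb reboot bootloader             # needs a system that still boots -- dead phones can't do this
    fastboot devices                  # a truly dead phone won't even show up here
    fastboot flashing unlock          # refused unless "OEM unlocking" was toggled
                                      # in Developer options while Android still ran
    fastboot flash system system.img  # only possible after a successful unlock
    ```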





  • Maybe developers will finally start implementing predictive back now that it’s no longer hidden behind developer options. It’s kinda nice to be able to peek at where the app will take you before you commit to going back, and ironically it currently tends to be implemented only by apps that already have decently made navigation.
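    For anyone wondering what implementing it actually involves, here’s a minimal sketch (assuming androidx.activity 1.8+ and android:enableOnBackInvokedCallback="true" in the manifest; the view being animated is a made-up stand-in):

    ```kotlin
    import android.os.Bundle
    import android.view.View
    import androidx.activity.BackEventCompat
    import androidx.activity.ComponentActivity
    import androidx.activity.OnBackPressedCallback

    class ExampleActivity : ComponentActivity() {

        private lateinit var sheet: View  // stand-in for whatever the user peeks behind

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            sheet = View(this)
            setContentView(sheet)

            onBackPressedDispatcher.addCallback(this, object : OnBackPressedCallback(true) {
                override fun handleOnBackProgressed(backEvent: BackEventCompat) {
                    // Runs while the user drags the back gesture: this is the
                    // "peek" -- animate the UI toward its post-back state.
                    sheet.translationX = -sheet.width * 0.1f * backEvent.progress
                }

                override fun handleOnBackPressed() {
                    // Gesture committed: actually navigate back.
                    finish()
                }

                override fun handleOnBackCancelled() {
                    // Gesture abandoned: snap the UI back to where it was.
                    sheet.translationX = 0f
                }
            })
        }
    }
    ```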

    Also, private space seems nice: finally a way to use the work profile sandbox natively, without having to install third-party apps that pretend to be work profile managers.



  • The only app that doesn’t auto-update for me is F-Droid itself (ironically), because it targets an old Android version. I’m running Android 14 on a Pixel, so with the strongest Google fuckery.

    Are you sure your F-Droid client is up to date? The new API was implemented in 1.19, and apparently I even misremembered: all you have to do to let F-Droid auto-update its apps is to update each of them manually one last time (so no fresh installation is required).

    Another long shot: there’s an option hidden in the expert settings that forces the old installation method - maybe you could check that it isn’t enabled?
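    For reference, the new API in question appears to be the Android 12+ unattended-update path in PackageInstaller: once F-Droid is the installer of record for an app (hence that one manual update), it can commit further updates without a confirmation prompt. A rough sketch, with the surrounding plumbing (the downloaded APK, the result IntentSender) assumed:

    ```kotlin
    import android.content.Context
    import android.content.IntentSender
    import android.content.pm.PackageInstaller
    import java.io.File

    // Rough sketch of an unattended update via PackageInstaller. Needs
    // Android 12+ (setRequireUserAction is API 31), and the update only goes
    // through silently if we installed the target package in the first place.
    fun updateSilently(context: Context, apk: File, packageName: String, resultSender: IntentSender) {
        val installer = context.packageManager.packageInstaller

        val params = PackageInstaller.SessionParams(
            PackageInstaller.SessionParams.MODE_FULL_INSTALL
        ).apply {
            setAppPackageName(packageName)
            // The key bit: no confirmation dialog for the user to tap through.
            setRequireUserAction(PackageInstaller.SessionParams.USER_ACTION_NOT_REQUIRED)
        }

        val sessionId = installer.createSession(params)
        installer.openSession(sessionId).use { session ->
            session.openWrite("base.apk", 0, apk.length()).use { out ->
                apk.inputStream().use { it.copyTo(out) }
                session.fsync(out)
            }
            session.commit(resultSender)  // outcome is delivered via the IntentSender
        }
    }
    ```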





  • What error? It gave you a string of tokens that seemed likely according to its training data. That’s all it does.

    If you ask it what color the sky is, it will tell you it’s blue, not because it knows that’s true but because these words “fit together”. Pretty much the only way to avoid this issue is to put some kind of filter in front of the LLM that tries to catch prompts known to produce unwanted results and silently replaces your prompt with something like “say: sorry, I don’t know”.

    I’m being very reductive here, but that’s the principle of how these things work: LLMs are not capable of determining the truthfulness of their responses.
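    To make “seemed likely” concrete, here’s a toy sketch of the sampling step (all numbers invented; a real model scores tens of thousands of tokens with a neural network, but the selection works the same way):

    ```kotlin
    import kotlin.math.exp
    import kotlin.random.Random

    // Toy model of "picking the token that fits": one made-up score per
    // candidate continuation of "the sky is", turned into probabilities
    // and sampled. Nothing here checks whether the chosen word is true.
    fun main() {
        val vocab = listOf("blue", "green", "wet", "falling")
        val logits = doubleArrayOf(4.0, 1.0, 0.5, 0.2)  // invented scores

        // Softmax: convert raw scores into a probability distribution.
        val exps = logits.map { exp(it) }
        val probs = exps.map { it / exps.sum() }

        // Sample the next token in proportion to its probability.
        var r = Random(42).nextDouble()
        val next = vocab[vocab.indices.first { i -> r -= probs[i]; r <= 0 }]

        println(vocab.zip(probs.map { "%.3f".format(it) }))  // [(blue, 0.907), ...]
        println("next token: $next")                         // almost always "blue"
    }
    ```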