• 0 Posts
  • 12 Comments
Joined 3 years ago
Cake day: July 1st, 2023

  • https://contentauthenticity.org/how-it-works

    The page is very light on technical detail, but I think this is a system like trusted platform modules (TPMs): a hardware root of trust in the camera holds the private key of an attestation certificate signed by the manufacturer at the time of manufacture, and the camera uses that key to sign the pictures it takes (a rough sketch of the verification this implies is at the end of this comment). The consortium is eager for people to take this up (“open-source software!”) and to support showing and appending to provenance data in their own software. The more people do so, the more valuable the special content-authenticating cameras become.

    But TPMs on PCs have not been without vulnerabilities. I seem to recall that some manufacturers used a default or example private key for their CA certificates, or something. Vulnerabilities in the firmware of a content-authenticating camera could be used to jailbreak it and make it sign arbitrary pictures. And, unless the CAI is so completely successful that every cell phone authenticates its pictures (which means we all pay rent to the C2PA), some of the most important images will always be unauthenticated under this scheme.

    And the entire scheme of trusted computing relies on wresting ultimate control of a computing device from its owner. That’s how other parties can trust the device without trusting the user. It can be guaranteed that there are things the device will not do, even if the user wants it to. This extends the dominance of existing power structures down into the every-day use of the device. What is not permitted, the device will make impossible. And governments may compel the manufacturer to do one thing or another. See “The coming war on general computation,” Cory Doctorow, 28c3.

    What if your camera refused to take any pictures as long as it’s located in Gaza? Or what if spies inserted code into a compulsory firmware update that would cause a camera with a certain serial number to recognize certain faces and edit those people out of pictures that it takes, before it signs them as being super-authentic?
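
    To make that first-paragraph guess concrete, here is a rough sketch of what a verifier would have to do under such a model. It assumes ECDSA keys and Python’s cryptography library; the function and argument names and the flat two-certificate chain are made up for illustration, and a real C2PA manifest is far more involved than raw image bytes plus a detached signature.

      # Hypothetical verifier for the attestation scheme guessed at above.
      from cryptography import x509
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec

      def photo_is_authentic(photo_bytes, signature, camera_cert_pem, maker_ca_pem):
          """True iff the photo's signature chains back to the manufacturer CA."""
          camera_cert = x509.load_pem_x509_certificate(camera_cert_pem)
          maker_ca = x509.load_pem_x509_certificate(maker_ca_pem)
          try:
              # 1. The camera's attestation certificate must be signed by the
              #    manufacturer's CA. (A real verifier would also check validity
              #    dates, revocation, and whether this CA is on a trust list.)
              maker_ca.public_key().verify(
                  camera_cert.signature,
                  camera_cert.tbs_certificate_bytes,
                  ec.ECDSA(camera_cert.signature_hash_algorithm),
              )
              # 2. The signature over the image bytes must verify against the
              #    key that was baked into the camera at manufacture time.
              camera_cert.public_key().verify(
                  signature, photo_bytes, ec.ECDSA(hashes.SHA256())
              )
          except InvalidSignature:
              return False
          return True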

  • A name I’ve seen in connection with this issue is Obtainium. From a cursory look, it appears to just streamline checking for and fetching APKs from GitHub release pages and other project-specific sources, rather than adding any trust (a sketch of the kind of check it automates is at the end of this comment). So maybe it just greases the slippery slope :)

    Security guidelines for mobile phones, and therefore policies enforced by large organizations (think Bring-Your-Own-Device), are likely to say that one may only install apps from the platform-provided official source, such as the Play Store for Android or the Apple App Store for iOS. You might say it’s an institutionalized form of “put[ting] too much trust in claims of authority.” Or you might say that it’s a formal cession of the job of establishing software trustworthiness to the platform vendors, at the mere expense of agency for users on those platforms.

    People are not taught how to verify the authenticity and legitimacy of software

    Rant: Mobile computing as we know it is founded on rounding off the rough corner of user agency, both to reduce how much users need to know in order to be successful and to provide the assurances other players need: device vendors, employers, banks, advertisers, governments, and copyright holders. See The Coming War on General Computation, Cory Doctorow, 2011. Within such a framework the user is not a trustworthy party, so the user’s opinion of authenticity and legitimacy, however well informed, doesn’t matter.
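
    For illustration, here is roughly the check a tool like that automates, written against GitHub’s public releases API; the function and repository names are hypothetical. Note what it does not do: nothing here establishes that the APK is trustworthy, only that a newer one exists.

      import json
      import urllib.request

      def newer_apk_urls(owner, repo, installed_tag):
          """Return download URLs for APK assets in the latest release, if newer."""
          url = f"https://api.github.com/repos/{owner}/{repo}/releases/latest"
          with urllib.request.urlopen(url) as resp:
              release = json.load(resp)
          if release["tag_name"] == installed_tag:
              return []  # already up to date
          return [
              asset["browser_download_url"]
              for asset in release["assets"]
              if asset["name"].endswith(".apk")
          ]

      # e.g. newer_apk_urls("some-owner", "some-app", "v1.2.3")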

  • They are made (I think) to be implementable - even to give implementors some flexibility. Then everybody goes and buys a tool to do it, and not that well. I thought 15 years ago that security configuration was a (voluminous) subset of system configuration and system administration, ripe for automation and rigorous documentation - not something to pay a different vendor for. But the market says otherwise: when you can split some work across a whole team, or even into a separate company, instead of glomming it into one job, that’s worth money to businesspeople.
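
    As a toy illustration of that claim: a “security configuration” check is just a system configuration check. The file path and the desired values below are examples I picked, not taken from any particular benchmark.

      from pathlib import Path

      # Settings we care about and the values we expect (examples only).
      DESIRED = {
          "permitrootlogin": "no",
          "passwordauthentication": "no",
      }

      def audit_sshd(path="/etc/ssh/sshd_config"):
          actual = {}
          for line in Path(path).read_text().splitlines():
              line = line.strip()
              if not line or line.startswith("#"):
                  continue
              parts = line.split(None, 1)       # e.g. "PermitRootLogin no"
              if len(parts) == 2:
                  actual[parts[0].lower()] = parts[1].strip()
          for key, want in DESIRED.items():
              have = actual.get(key, "<unset>")
              status = "ok " if have == want else "FIX"
              print(f"{status} {key}: have {have!r}, want {want!r}")

      if __name__ == "__main__":
          audit_sshd()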

  • There are many ways to be more selective about whose email to accept. SPF, DKIM, DMARC, and various blacklists are among them (a toy sketch of the SPF part is at the end of this comment). They are supposed to make life harder for spammers, but they have also made running a mail server something that few dare to try anymore. Setup is not easy; getting blacklisted is, and it causes silent delivery failures that take days of work to fix.

    As a result, most email is now handled by Microsoft and Google. But that didn’t stop phishers: they just go after people at smaller companies where security isn’t as tight yet, and then they’ve got valid Microsoft accounts to send from.

    As for PKI: if I may assume you are, or have been, affiliated with an armed service – whose property is your CAC? And why did you use a pseudonym to make this post? (I mean to be pithy, not sarcastic.) I think Liars and Outliers by Schneier is all about this sort of thing - but I didn’t get much of it read before it was due back at the library.
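
    Going back to the first paragraph: here is a bare-bones sketch of what the SPF part boils down to. The published record is shown as a literal (a real check fetches it with a DNS TXT lookup), and only the ip4 and all mechanisms are handled; real SPF also has include:, mx, a, redirect=, and so on.

      import ipaddress

      # What the domain owner publishes in DNS, e.g.
      #   example.com.  TXT  "v=spf1 ip4:192.0.2.0/24 -all"
      SPF_RECORD = "v=spf1 ip4:192.0.2.0/24 -all"

      def spf_check(connecting_ip, record=SPF_RECORD):
          """Very naive: does the sending IP match an ip4 mechanism?"""
          ip = ipaddress.ip_address(connecting_ip)
          for term in record.split()[1:]:          # skip the "v=spf1" tag
              if term.startswith("ip4:") and ip in ipaddress.ip_network(term[4:]):
                  return "pass"
              if term in ("-all", "~all"):
                  return "fail" if term == "-all" else "softfail"
          return "neutral"

      print(spf_check("192.0.2.7"))    # pass
      print(spf_check("203.0.113.9"))  # fail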