LaTeXpOsEd: A Systematic Analysis of Information Leakage in Preprint Archives
69 points | 2 days ago | 8 comments | arxiv.org
SiempreViernes | 2 days ago
As far as I can tell, they trawled a big archive for sensitive information, (unsurprisingly) found some, and then didn't try to contact anyone affected before telling the world "hey, there are login credentials to be found in here".

crote | 2 days ago
Don't forget giving it a fancy name in the hope that it'll go viral!

I am getting so tired of every vulnerability getting a cutesy pet name, pretending to be the new Heartbleed / Spectre / Meltdown...

wongarsu | 2 days ago
Beats having to remember and communicate CVE numbers

KeplerBoy | 2 days ago
It's not like every datapoint comes with the email of the corresponding author.

mseri | 2 days ago
Google has a great tool for reducing the attack surface: https://github.com/google-research/arxiv-latex-cleaner
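
For anyone who hasn't tried it, the basic invocation is a one-liner (a sketch assuming the pip-installed CLI; my_paper is a placeholder folder name):

    pip install arxiv-latex-cleaner
    # strips comments etc. and writes a cleaned copy to my_paper_arXiv/
    arxiv_latex_cleaner my_paper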

Y_Y | 2 days ago
I use this before submission and recommend others do too. If I were in charge of arXiv, I'd have it integrated as an optional part of the submission process.

JohnKemeny | 1 day ago
Most people I know simply use `latexpand`, which "flattens" all files into one .tex file and by default removes all comments.
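
Something like this (file names are placeholders):

    # inline every \input/\include into a single file; comments are dropped by default
    latexpand main.tex > arxiv_submission.tex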

Jaxan | 2 days ago
I do this by hand. In the whole process, cleaning up the TeX before submission is a small step. And I like to keep some comments, like those explaining how some TikZ figures are made. Might help someone some day.

andrepd | 1 day ago
Yes but then we wouldn't have https://xcancel.com/LeaksPh

barthelomew | 2 days ago
Paper LaTeX files often contain surprising details. When a paper lacks code, looking at the LaTeX source has become part of my reproduction workflow. The comments often reveal non-trivial insights; frequently they contain a simpler version of the methodology section (which is purposely obscured with mathematical jargon for the sake of "novelty").

seg_lol | 2 days ago
Reading the LaTeX equations also makes for easier (LLM) translation into code, compared to trying to read the PDF.
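
If you want just the math out of a source dump before handing it to a model, a crude sketch (assuming a single main.tex) is:

    # print every displayed equation environment in the source
    sed -n '/\\begin{equation}/,/\\end{equation}/p' main.tex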

sneela | 2 days ago
I agree with other comments that this research treads a fine ethical line. Did the authors responsibly disclose this, as is often done in the security research community? I cannot find any mention of it in the paper. The researchers seem to be involved in security-related research (the first author is doing a PhD, the last author holds a PhD).

At least arXiv could have run the cleaner [1] before the print of this pre-print (lol). If there was no disclosure, then I think this pre-print becomes unethical to put up.

> leading to the identification of nearly 1,200 images containing sensitive metadata. The types of data represented vary significantly. While device information (e.g., the camera used) or software details (such as the exact version of Photoshop) may already raise concerns, in over 600 cases the metadata contained GPS coordinates, potentially revealing the precise location where a photo was taken. In some instances, this could expose a researcher’s home address (when tied to a profile picture) or the location of research facilities (when images capture experimental equipment)

Oof, that's not too great.
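
Checking figures for this before submission is cheap, e.g. with exiftool (paths are placeholders):

    # list any GPS tags lurking in the figures
    exiftool -gps:all figures/*.jpg
    # strip all metadata in place, without keeping backup copies
    exiftool -all= -overwrite_original figures/*.jpg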

[1] https://github.com/google-research/arxiv-latex-cleaner

michaelmior | 2 days ago
Having arXiv run the cleaner automatically would definitely be cool, although I've found it non-trivial to get working consistently for my own papers. That said, it would be nice if this were at least an option.

cycomanic | 2 days ago
Leaks of read/write access to documents and of GitHub, Dropbox, etc. credentials are certainly worrying, but location and author/photographer details in photo metadata? That's quite a stretch, and it seems like the authors here are just trying to boost the numbers.

The vast majority (I would wager >(100 - 1e-4)%) of research institution locations are public knowledge and can be found by simply googling the institution's address (I am not aware of a single research institution that publishes publicly while keeping its location confidential).

calvinmorrison | 2 days ago
They responsibly disclosed it in their research paper. An unethical use would be to use those coordinates to gain state secrets about, say, research facilities.

cozzyd | 2 days ago
This is why my forarxiv.tex make targets always include a call to latexpand --empty-comments.
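
Roughly, as a sketch (file names are placeholders; the recipe line must start with a tab):

    forarxiv.tex: main.tex
    	latexpand --empty-comments main.tex > forarxiv.tex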

Though I doubt all my collaborators do something similar.

agarttha | 2 days ago
I offer free beer in a comment in my arXiv TeX source.

fcpk | 2 days ago
While EXIF might be bad for private photos, I do think researchers should not tamper with it unless there is a clear security rationale (i.e. private photos or things that are meant to be hidden). Otherwise, leave the data alone.

kmm | 2 days ago
I sort of understand the reasoning behind why arXiv prefers TeX to PDF [1], even though I feel it's a bit much to make it mandatory to submit the original TeX file if they detect that a submitted PDF was produced from one. But I've never understood what the added value is in hosting the source publicly.

Though I have to admit, when I was still in academia, whenever I saw a beautiful figure or formatting in a preprint, I'd often try to take some inspiration from the source for my own work, occasionally learning a neat new trick or package.

1: https://info.arxiv.org/help/faq/whytex.html

irowe | 2 days ago
A huge value in having authors upload the original source is that it divorces the content from the presentation (mostly). That the original sources were available was sufficient for a large majority of the corpus to be automatically rendered into HTML for easier reading on many devices: https://info.arxiv.org/about/accessible_HTML.html. I don't think it would have been as simple if they had to convert PDFs.